| column | dtype | values / range |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
4,192
closed
[Reformer] Docstring: fix examples again
Forgot a fix.
05-07-2020 08:53:52
05-07-2020 08:53:52
transformers
4,191
closed
[Reformer] Fix example and error message
Fix typos and improve error message.
05-07-2020 08:36:55
05-07-2020 08:36:55
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=h1) Report > Merging [#4191](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/96c78396ce1baf5e19c0618689005f93c7f42d79&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4191/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4191 +/- ## ========================================== - Coverage 78.41% 78.41% -0.01% ========================================== Files 120 120 Lines 19785 19785 ========================================== - Hits 15515 15514 -1 - Misses 4270 4271 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.17% <ø> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (-0.17%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=footer). Last update [96c7839...488d787](https://codecov.io/gh/huggingface/transformers/pull/4191?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,190
closed
[Reformer] fix docstring
Fix small typo in docstring
05-07-2020 08:27:53
05-07-2020 08:27:53
transformers
4,189
closed
Bug: can not use pretrained BERT on multiple GPUs with DataParallel (PyTorch 1.5.0)
Python: 3.6.10 PyTorch: 1.5.0 Transformers: 2.8.0 and 2.9.0 In the following code, I wrap the pretrained BERT with a DataParallel wrapper so as to run it on multiple GPUs: > import torch, transformers > model = transformers.AutoModel.from_pretrained("bert-base-multilingual-cased") > model = torch.nn.DataParallel(model) > model = model.cuda() > input = torch.ones([16, 10], dtype=torch.long) > input = input.cuda() > model(input) But I got the following error: > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ > result = self.forward(*input, **kwargs) > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward > outputs = self.parallel_apply(replicas, inputs, kwargs) > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply > return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply > output.reraise() > File "/home/anaconda/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise > raise self.exc_type(msg) > StopIteration: Caught StopIteration in replica 0 on device 0. > Original Traceback (most recent call last): > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker > output = module(*input, **kwargs) > File "/home/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ > result = self.forward(*input, **kwargs) > File "/home/anaconda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 734, in forward > extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility > StopIteration > But it will work if I remove the DataParallel wrapper.
05-07-2020 04:24:22
05-07-2020 04:24:22
By the way, when I downgrade PyTorch from 1.5.0 to 1.4.0, the error disappears. <|||||>The same issue: #3936<|||||>Closing in favor of #3936
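For reference, the reproduction quoted above rewritten as a plain runnable script (requires at least one CUDA device; as the comments note, the failure is specific to PyTorch 1.5.0 and is tracked in #3936):

```python
import torch
import transformers

# Same setup as in the report: wrap a pretrained BERT in DataParallel.
model = transformers.AutoModel.from_pretrained("bert-base-multilingual-cased")
model = torch.nn.DataParallel(model).cuda()

input_ids = torch.ones([16, 10], dtype=torch.long).cuda()

# Raises "StopIteration: Caught StopIteration in replica 0" on PyTorch 1.5.0;
# works on PyTorch 1.4.0 or without the DataParallel wrapper.
outputs = model(input_ids)
```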
transformers
4,188
closed
Fix Albert Attention
This PR simplifies the code of the `AlbertAttention` class.
05-07-2020 02:09:45
05-07-2020 02:09:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=h1) Report > Merging [#4188](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/877fc56410e3a0495f62e07e66a73e6b3b9629bc&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4188/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4188 +/- ## ======================================= Coverage 78.10% 78.11% ======================================= Files 117 117 Lines 18962 18963 +1 ======================================= + Hits 14811 14813 +2 + Misses 4151 4150 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `75.38% <100.00%> (+0.06%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.92% <0.00%> (-0.13%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.32%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=footer). Last update [877fc56...7c64f23](https://codecov.io/gh/huggingface/transformers/pull/4188?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,187
closed
How to apply Torchtext convenience classes to prepare data for a Transformer?
Hello, Reading the tutorial [Language Translation with torchText](https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html) I wondered how someone could use those convenience classes (`Field, BucketIterator`) to train/fine-tune a `Transformer`. For instance, I'm currently working with a large dataset distributed in jsonl files which looks like: ```python { "query": "this is a query 1", "doc": "relevant document regarding query 1" }, { "query": "this is a query 2", "doc": "relevant document regarding query 2" }, ... ``` Now, to forward this data into a transformer like Bert, it is necessary to convert this dataset into the format: ```python3 ( #queries { 'input_ids': tensor([ [ 101, 2023, 2003, 1037, 23032, 1015, 102, 0], [ 101, 2023, 2003, 1037, 23032, 1016, 102, 0]]), 'attention_mask': tensor([ [1, 1, 1, 1, 1, 1, 1, 0], [1, 1, 1, 1, 1, 1, 1, 0]]) }, #docs { 'input_ids': tensor([ [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102], [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102]]), 'attention_mask': 'input_ids': tensor([ [1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1]]) } ``` So, what would be a clear and efficient approach to apply those convenience classes to tokenize a text dataset to fit it in the required format of a transformer?
05-07-2020 01:12:27
05-07-2020 01:12:27
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
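A minimal sketch of one way to get the query/doc tensors described in the question without torchtext, batching straight through the tokenizer (assumes a recent transformers release where the tokenizer is callable with padding support; the file name and "one JSON object per line" layout are placeholders):

```python
import json
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Hypothetical jsonl file with one {"query": ..., "doc": ...} object per line.
with open("pairs.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

queries = tokenizer([r["query"] for r in records], padding=True, truncation=True, return_tensors="pt")
docs = tokenizer([r["doc"] for r in records], padding=True, truncation=True, return_tensors="pt")

# Each is a dict with "input_ids", "attention_mask" (and "token_type_ids" for BERT).
print(queries["input_ids"].shape, docs["input_ids"].shape)
```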
transformers
4,186
closed
Add patience argument to Trainer
This closes #4894. # Summary Often, we want to stop training if loss does not improve for a number of epochs. This PR adds a "patience" argument, which is a limit on the number of times we can get a non-improving eval loss before stopping training early. It is implemented by other NLP frameworks, such as AllenNLP (see [trainer.py](https://github.com/allenai/allennlp/blob/master/allennlp/training/trainer.py#L95) and [metric_tracker.py](https://github.com/allenai/allennlp/blob/1a8a12cd1b065d74fec3d2e80105a684736ff709/allennlp/training/metric_tracker.py#L6)). # Motivation This feature allows faster fine-tuning by breaking the training loop early and avoids users the toil of checking metrics on Tensorboard. # Caveats Often, models are evaluated once per epoch, but run_lm_finetuning.py has an option to evaluate after a set number of model update steps (dictated by `--logging_steps` if `--evaluate_during_training` is true). Because of this, I've elected to tie patience to the number of evaluations without improvement in loss.
05-06-2020 21:48:10
05-06-2020 21:48:10
This supercedes https://github.com/huggingface/transformers/pull/2840, where I added patience to the outdated `run_language_modeling.py` script.<|||||>Looking good! Can you add a reference to your original post that this closes https://github.com/huggingface/transformers/issues/4894? Thanks<|||||>Hello, when this feature will be merged? I would like to use it. Thank you.<|||||>> Hello, when this feature will be merged? I would like to use it. Thank you. There are some changes requested that @thesamuel should fix before this can be merged.<|||||>Bump. Early stopping is critical for an automated Trainer that reliably gives us the best model. Current way of figuring out the training stopping point seems to be specifying a static train_epochs but the training duration a model can take depends on way too many factors like learning rate, data complexity, model, model size, optimizer and so on that it is unreasonable to ask the user to specify the epochs in advance. I believe the current assumption is that people train with very small learning rates so that the loss always seems to keep decreasing very slowly but according to my experience (and on my data) it is a sub-optimal schedule which takes too much time. I see that training with higher learning rates with larger batch sizes and stopping at the early stopping point results in an equally good if not better models. Although this requires use of early stopping.<|||||>I would like to use this early stopping on downstream training. The current implementation only stops training by monitoring loss. IMO it should also be possible to monitor other metrics like F1 and ROC-AUC. I also would like to add a feature that stores the model each time when the monitored metric improves and then optionaly loads the model after training. Then later evaluation can be done on this "best" model. @thesamuel @julien-c @kevin-yauris what do you think?<|||||>I plan to work on this once I'm finished with the Funnel Transformer model @PhilipMay (so end of this week, beginning of the next).<|||||>> > > I plan to work on this once I'm finished with the Funnel Transformer model @PhilipMay (so end of this week, beginning of the next). @sgugger That would be awsome. Maybe you want to get some inspiration from the FARM training loop which is pretty nice IMO: https://github.com/deepset-ai/FARM/blob/master/farm/train.py#L262-L370<|||||>I just found this PR that was already merged: #7431 I think it solved this...<|||||>Not quite, but it makes implementing it easier.<|||||>> Not quite, but it makes implementing it easier. Yes - you are right. The patience part is still missing. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@sgugger Should we keep this open? You wrote in this thread you will work on this if you find the time, but I am not sure if you plan to use another PR for that.<|||||>There has been a PR merged adding the `EarlyStoppingCallback` (#8581) so I think this can be closed now.<|||||>Thanks @cbrochtrup @sgugger! Sorry I didn't get around to this...<|||||>You're welcome, happy to help!
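As noted at the end of the thread, #8581 added an `EarlyStoppingCallback` that covers this request. A hedged sketch of how it plugs into `Trainer` (argument names follow recent releases; `model`, `train_dataset` and `eval_dataset` are assumed to be defined elsewhere):

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",   # evaluate periodically so patience can trigger
    eval_steps=500,
    load_best_model_at_end=True,   # required by the early-stopping callback
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,                   # assumed: a pretrained model loaded elsewhere
    args=args,
    train_dataset=train_dataset,   # assumed: tokenized datasets prepared elsewhere
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```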
transformers
4,185
closed
New model request: MobileBERT from Google
# 🌟 New model addition This issue is to request adding the MobileBERT model, which was recently released by Carnegie Mellon University, Google Research, and Google Brain. ## Model description MobileBERT is a more computationally-efficient model for achieving BERT-base level accuracy on a smartphone. <!-- Important information --> ## Open source status * [x] the model implementation is available: https://github.com/google-research/google-research/tree/master/mobilebert * [x] the model weights are available: [uncased_L-24_H-128_B-512_A-4_F-4_OPT](https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz) (MobileBERT Optimized Uncased English) * [x] who are the authors: Zhiqing Sun, Hongkun Yu (@saberkun), Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou
05-06-2020 21:17:20
05-06-2020 21:17:20
https://github.com/lonePatient/MobileBert_PyTorch<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This was completed in #4901
transformers
4,184
closed
Model card for allegro/herbert-klej-cased-tokenizer-v1
05-06-2020 19:59:41
05-06-2020 19:59:41
transformers
4,183
closed
Model card for allegro/herbert-klej-cased-v1
05-06-2020 19:56:16
05-06-2020 19:56:16
Great work and great model cards, congrats on beating the state-of-the-art of Polish NLU!<|||||>Thanks!
transformers
4,182
closed
Is it possible to get document embeddings using GPT-2? If so, how?
# ❓ Questions & Help I posted a question here on SO: https://stackoverflow.com/questions/61641257/how-to-get-document-embeddings-using-gpt-2?noredirect=1#comment109035633_61641257 But unfortunately was informed that "Please notice that SO is not a tutorial service, and also that recommendation requests for external resources are explicitly off-topic." ## Details I'm curious if using GPT-2 might yield a higher accuracy for document vectors (with greatly varying length) or not (would it surpass the state of the art?) Really I'm most interested in document embeddings that are as accurate as possible. I'm wondering if using GPT-2 will get results that are more accurate than Paragraph Vectors for example. I heard that in order to get vectors from GPT-2 "you can use a weighted sum and/or concatenation of vector outputs at its hidden layers (typically the last few hidden layers) as a representation of its corresponding words or even "meaning" of the entire text, although for this role BERT is used more often as it is bi-directional and takes into account of both forward and backward contexts." As a machine learning and NLP beginner, I'd love to know how to go about this, or to be pointed in the right direction to learn more about how to attempt this in Python. I've tried fine-tuning GPT-2 before but I have no idea how to extract vectors from it for text. **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/61641257/how-to-get-document-embeddings-using-gpt-2?noredirect=1#comment109035633_61641257
05-06-2020 19:33:12
05-06-2020 19:33:12
Not from Hugging Face, but for this I would use something like SentenceBERT, as far as I know.<|||||>@moinnadeem Thank you for pointing me in that direction! But would SentenceBERT really work for documents?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
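A hedged sketch of the "pool the hidden states" idea mentioned in the question, using GPT-2's last hidden layer with mean pooling over non-padding tokens; this is one heuristic among several, with no claim that it beats Paragraph Vectors or Sentence-BERT:

```python
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2Model.from_pretrained("gpt2")
model.eval()

docs = ["First document text ...", "Second, much longer document text ..."]
enc = tokenizer(docs, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = model(**enc)

# Mean-pool the final hidden layer over real (non-padding) tokens.
mask = enc["attention_mask"].unsqueeze(-1).float()
doc_embeddings = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(doc_embeddings.shape)  # (num_docs, hidden_size)
```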
transformers
4,181
closed
[Marian] Key-Error for some languages
model -> symbol that caused `KeyError`. ```python {'ha-en': '|', 'ber-es': '▁Be', 'pis-fi': '▁|', 'es-mt': '|', 'fr-he': '₫', 'niu-sv': 'OGI', 'fi-fse': '▁rentou', 'fi-mh': '|', 'hr-es': '|', 'fr-ber': '▁devr', 'ase-en': 'olos'} ``` Reproduce ```python pair = 'ber-es' mname = f'Helsinki-NLP/opus-mt-{pair}' tok = MarianTokenizer.from_pretrained(mname) tok.prepare_translation_batch(['Bessif kan ay aɣ-d-iṣaḥ wakud akken ad necc imensi.']) ``` Traceback: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-55-5da2d9c189be> in <module> 2 mname = f'Helsinki-NLP/opus-mt-{pair}' 3 tok = MarianTokenizer.from_pretrained(mname) ----> 4 tok.prepare_translation_batch(['Bessif kan ay aɣ-d-iṣaḥ wakud akken ad necc imensi.']) ~/transformers_fork/src/transformers/tokenization_marian.py in prepare_translation_batch(self, src_texts, tgt_texts, max_length, pad_to_max_length, return_tensors) 145 max_length=max_length, 146 pad_to_max_length=pad_to_max_length, --> 147 src=True, 148 ) 149 if tgt_texts is None: ~/transformers_fork/src/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, pad_to_max_length, is_pretokenized, return_tensors, return_token_type_ids, return_attention_masks, return_overflowing_tokens, return_special_tokens_masks, return_offsets_mapping, return_lengths, **kwargs) 1729 ids, pair_ids = ids_or_pair_ids, None 1730 -> 1731 first_ids = get_input_ids(ids) 1732 second_ids = get_input_ids(pair_ids) if pair_ids is not None else None 1733 input_ids.append((first_ids, second_ids)) ~/transformers_fork/src/transformers/tokenization_utils.py in get_input_ids(text) 1697 if isinstance(text, str): 1698 tokens = self.tokenize(text, add_special_tokens=add_special_tokens, **kwargs) -> 1699 return self.convert_tokens_to_ids(tokens) 1700 elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str): 1701 return self.convert_tokens_to_ids(text) ~/transformers_fork/src/transformers/tokenization_utils.py in convert_tokens_to_ids(self, tokens) 1340 ids = [] 1341 for token in tokens: -> 1342 ids.append(self._convert_token_to_id_with_added_voc(token)) 1343 return ids 1344 ~/transformers_fork/src/transformers/tokenization_utils.py in _convert_token_to_id_with_added_voc(self, token) 1349 if token in self.added_tokens_encoder: 1350 return self.added_tokens_encoder[token] -> 1351 return self._convert_token_to_id(token) 1352 1353 def _convert_token_to_id(self, token): ~/transformers_fork/src/transformers/tokenization_marian.py in _convert_token_to_id(self, token) 90 91 def _convert_token_to_id(self, token): ---> 92 return self.encoder[token] 93 94 def _tokenize(self, text: str, src=True) -> List[str]: KeyError: '▁Be' ```
05-06-2020 17:54:42
05-06-2020 17:54:42
The Marian C++ code uses `<unk>` in this case.
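Given that comment, one possible user-side workaround (a sketch, not the upstream fix) is to make the vocabulary lookup fall back to the unknown token instead of raising; the direct tokenizer call assumes a recent transformers release:

```python
from transformers import MarianTokenizer

class UnkFallbackMarianTokenizer(MarianTokenizer):
    """Hypothetical subclass: map out-of-vocabulary sentencepieces to <unk>."""

    def _convert_token_to_id(self, token):
        # Mirror the Marian C++ behaviour described above instead of raising KeyError.
        return self.encoder.get(token, self.encoder[self.unk_token])

tok = UnkFallbackMarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ber-es")
batch = tok(["Bessif kan ay aɣ-d-iṣaḥ wakud akken ad necc imensi."], return_tensors="pt")
print(batch["input_ids"])
```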
transformers
4,180
closed
considering empty input text to avoid string index out of range error
When the input string is empty, this condition in `tokenization_roberta` raises a string index out of range error. For example, the QQP dataset contains a pair input whose second string is empty: {'idx': 362246, 'label': 0, 'question1': b'How can I develop android app?', 'question2': b''}
05-06-2020 17:10:12
05-06-2020 17:10:12
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
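For illustration, the kind of guard this PR is after (a sketch of the pattern, not the actual diff, which lives in the RoBERTa tokenizer's prefix-space handling):

```python
def maybe_add_prefix_space(text: str, add_prefix_space: bool) -> str:
    # Check the length before indexing text[0], so empty strings such as the
    # QQP example above ({'question2': b''}) no longer raise IndexError.
    if add_prefix_space and (len(text) == 0 or not text[0].isspace()):
        text = " " + text
    return text

print(repr(maybe_add_prefix_space("", True)))            # ' '
print(repr(maybe_add_prefix_space("How can I", True)))   # ' How can I'
```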
transformers
4,179
closed
Create README.md
Model card for my de novo drug discovery model using MLM.
05-06-2020 15:39:33
05-06-2020 15:39:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=h1) Report > Merging [#4179](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ff8ed52dd8c6268f2535c0721cde9e360fbb0ce0&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4179/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4179 +/- ## ======================================= Coverage 78.79% 78.79% ======================================= Files 114 114 Lines 18712 18712 ======================================= Hits 14745 14745 Misses 3967 3967 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=footer). Last update [ff8ed52...c11fb93](https://codecov.io/gh/huggingface/transformers/pull/4179?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Really cool. Do they have a downstream task that we can fine-tune on, and get some eval results?<|||||>I will check it again and will let you know. I am also working on a Molecule Drug Transformer for drug target interaction system. It is more powerful and they provide several downstream tasks and evaluation datasets. Something like this: https://arxiv.org/abs/2004.11424
transformers
4,178
closed
[Model Cards] Add 1010 model cards for Helsinki-NLP
- Generated using tools in `convert_marian_to_pytorch.py`. - Takes the bottom most entry in each opus-mt-train/models/*/README.md. This assumes that the bottom entry is most recent. (Created by @jorgtied). - Example below is at `model_cards/Helsinki-NLP/opus-mt-fr-en/README.md` __________________________ # opus-2020-02-26.zip * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdiscussdev2015-enfr.fr.en | 33.1 | 0.580 | | newsdiscusstest2015-enfr.fr.en | 38.7 | 0.614 | | newssyscomb2009.fr.en | 30.3 | 0.569 | | news-test2008.fr.en | 26.2 | 0.542 | | newstest2009.fr.en | 30.2 | 0.570 | | newstest2010.fr.en | 32.2 | 0.590 | | newstest2011.fr.en | 33.0 | 0.597 | | newstest2012.fr.en | 32.8 | 0.591 | | newstest2013.fr.en | 33.9 | 0.591 | | newstest2014-fren.fr.en | 37.8 | 0.633 | | Tatoeba.fr.en | 57.5 | 0.720 |
05-06-2020 15:32:45
05-06-2020 15:32:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=h1) Report > Merging [#4178](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ff8ed52dd8c6268f2535c0721cde9e360fbb0ce0&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4178/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4178 +/- ## ========================================== - Coverage 78.79% 78.79% -0.01% ========================================== Files 114 114 Lines 18712 18712 ========================================== - Hits 14745 14744 -1 - Misses 3967 3968 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=footer). Last update [ff8ed52...dc9a6bc](https://codecov.io/gh/huggingface/transformers/pull/4178?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I am working on a NMT en to es following this format https://github.com/facebookresearch/XLM/#iii-applications-supervised--unsupervised-mt Is there any way to integrate my final model on the HUB? (script for conversion or smth like that)<|||||>This creates a lot of files but I think it's fine. Thoughts?<|||||>I went through all the 995 files and they look good<|||||>> I went through all the 995 files and they look good haha<|||||>> This creates a lot of files but I think it's fine. Thoughts? I think it's fine as well!<|||||>fwiw, it's 5MB of data. Previously, model_cards was 1MB of data. I think it's fine to merge @julien-c <|||||>Gunna merge this 7pm EST barring objections @julien-c <|||||>Can you add them to the S3 bucket instead of the repo? I'll add code tomorrow to display them on the website from the repo (not currently the case)<|||||>Sure, will add once the group naming question is resolved.
transformers
4,177
closed
Not able to import certain packages
# 🐛 Bug ## Information - I am using `run_glue.py`. - Working with GLUE ## To reproduce Steps to reproduce the behavior: Even though these functions are imported in `__init__.py` (I have thoroughly checked myself), I'm not able to import `EvalPrediction`, `HfArgumentParser`, `Trainer`, `TrainingArguments` ## Expected behavior Should be able to import them. I'm sharing this [colab notebook](https://colab.research.google.com/drive/1CqWzt9A96iUT2vh47qwnMSDMsQHQkRSx) link. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8 - Platform: Ubuntu 18.04 - Python version: 1.5 - PyTorch version (GPU?): 1.5 - Tensorflow version (GPU?): NA - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
05-06-2020 13:53:02
05-06-2020 13:53:02
You need to make sure you install from source as documented in the README
transformers
4,176
closed
ONNX conversion script.
This PR adds conversion script to export our models to ONNX IR. Plan is to support both PyTorch and TensorFlow: - [x] PyTorch - [ ] TensorFlow TensorFlow currently blocked because of an issue in the conversion script provided by ONNX (seems fixed on master : https://github.com/onnx/tensorflow-onnx/issues/876)
05-06-2020 11:36:12
05-06-2020 11:36:12
transformers
4,175
closed
args.output_dir seems like been ignored
https://github.com/huggingface/transformers/blob/a638e986f45b338c86482e1c13e045c06cfeccad/examples/run_squad.py#L814 It seems that args.output_dir is being ignored when iterating over checkpoints while args.eval_all_checkpoints is True. It probably should be: global_step = checkpoint.split("-")[-1] if (len(checkpoints) > 1 and checkpoint != args.output_dir) else ""
05-06-2020 09:35:13
05-06-2020 09:35:13
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,174
closed
Make ElectraPreTrainedModel importable
# 🚀 Feature request Make `ElectraPreTrainedModel` importable ## Motivation Consistency. `DistilBertPreTrainedModel` and `AlbertPreTrainedModel` are importable but I cannot import `ElectraPretrainedModel` from v2.8.0 (see https://github.com/huggingface/transformers/issues/1968) ## Your contribution https://github.com/huggingface/transformers/pull/4173
05-06-2020 07:30:33
05-06-2020 07:30:33
Thanks for contributing #4173 :).
transformers
4,173
closed
Include ElectraPreTrainedModel into __init__
https://github.com/huggingface/transformers/issues/4174
05-06-2020 07:23:22
05-06-2020 07:23:22
transformers
4,172
closed
ImportError: cannot import name 'AutoModel' from 'transformers'
# 🐛 Bug (Not sure that it is a bug, but it is too easy to reproduce I think) ## Information I couldn't run `python -c 'from transformers import AutoModel'`, instead getting the error on the titile. ## To reproduce Steps to reproduce the behavior: 1. `$ sudo docker run -it --rm python:3.6 bash` 2. `# pip install tensorflow==2.0 transformers==2.8.0` 3. `# python -c 'from transformers import AutoModel'` ``` Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: cannot import name 'AutoModel' ``` Initially I got this error with `transformers-cli download` : ``` # transformers-cli download t5-large Traceback (most recent call last): File "/usr/local/bin/transformers-cli", line 32, in <module> service.run() File "/usr/local/lib/python3.6/site-packages/transformers/commands/download.py", line 29, in run from transformers import AutoModel, AutoTokenizer ImportError: cannot import name 'AutoModel' ``` ## Expected behavior no import error. ## Environment info - `transformers` version: 2.8.0 - Platform: Linux-4.15.0-99-generic-x86_64-with-debian-10.3 - Python version: 3.6.10 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.0.0 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no Also tested on 3.6@Ubuntu 18.04, 3.8@Docker(python:3.8). The Docker backend is Ubuntu 18.04. Thank you.
05-06-2020 07:06:03
05-06-2020 07:06:03
Got the same result with 2.9.0, i.e. with `# pip install tensorflow==2.0 transformers==2.9.0`<|||||>`AutoModel` is the equivalent of `TFAutoModel` but for PyTorch model classes. If you don't have pytorch installed this is expected. Use `TFAutoModel` instead =) <|||||>Thank you!<|||||>@julien-c But there still is a problem with `transformers-cli`. Now, people who want to use `transformers-cli download` are required to install PyTorch even when they use Tensorflow only. edit: It could be the problem specific to `t5-large`. I'll try it later.<|||||>I have pytorch installed but still show the error: `ImportError: cannot import name 'AutoModelForSequenceClassification'` Environment info: * Ubuntu 16.04.6 * Python 3.6.8 * Torch 1.2.0 * CUDA 10.0 * transformers 4.8.2<|||||>@JTWang2000, even without PyTorch you should be able to import `AutoModelForSequenceClassification`: ```py Python 3.8.6 (default, Dec 4 2020, 09:21:54) [GCC 10.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'torch' >>> from transformers import AutoModelForSequenceClassification >>> ``` Can you offer a reproducible example? Thanks!<|||||>@LysandreJik Thank you for your reply ### Steps to reproduce the behavior: * Python: 3.6.8 * Ubuntu 16.04.6 * CUDA 10.0 ` pip install torch==1.2.0 torchvision==0.4.0` ` pip install transformers` ``` Python 3.6.8 (default, Jul 6 2021, 15:59:52) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import AutoModel Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'AutoModel' >>> from transformers import AutoModelForSequenceClassification Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'AutoModelForSequenceClassification' ```<|||||>The may be som error as I am getting: ``` Traceback (most recent call last): File "/home/usename/dev/speech_recognition.py", line 1, in <module> from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC ImportError: cannot import name 'Wav2Vec2ForCTC' from 'transformers' (unknown location) ``` The line of code: ``` from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC ``` Transformers: 4.3.0 Python 3.9.6 OS: Fedora 34<|||||>@JTWang2000 Recent versions of `transformers` are compatible with more recent PyTorch versions, see the [README](https://github.com/huggingface/transformers#with-pip). @kennylajara could you try upgrading `transformers` to a more recent version to see if it fixes your issue? (Wav2Vec2 has been improved since v4.3.0) - if not, do you mind linking me to a reproducible code example (with pip installations)/colab notebook or something similar that reproduces your issue?<|||||>**EDIT:** Well... **This** issue has been solved by updating dependencies but now it produces another issue. I will post the link to the reproducible issue in a minute.<|||||>@LysandreJik There you go: [https://github.com/kennylajara/speech-recognition](https://github.com/kennylajara/speech-recognition)<|||||>@LysandreJik Thank you for the reply. I reinstall PyTorch with version 1.7.0 and now it works. Thanks! <|||||>@kennylajara answered in a separate issue (please keep issues on different subjects separate, thank you): https://github.com/kennylajara/speech-recognition/issues/1<|||||>In my case I both installed pytorch and tensorflow (noob). 
Creating a new environment just for PyTorch helped me solve it.
transformers
4,171
closed
Encoder/Decoder generation
Hello! I tried to train a Bert2Bert model for QA generation, however when I try the generate function it returns gibberish. I also tried using the example code below, and that also generated gibberish(the output is "[PAD] leon leon leon leon leonieieieieie shall shall shall shall shall shall shall shall shall"). Is the generate function supposed to work for EncoderDecoder models, and what am I doing wrong? ``` from transformers import EncoderDecoderModel, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id) ```
05-06-2020 06:59:39
05-06-2020 06:59:39
Are you using this exact line ``` model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert ``` If yes, then please use paths for your saved model. Few other things to try: verify data pipeline, try using beam search or sampling in generate<|||||>Thanks! I am using that exact line. I saved my trained model using save_pretrained() and it saved everything as one file. How would I separate this, or should I just retrain and re-save the encoder and decoder separately? Also, does the untrained model not work due to the untrained cross attention layer? <|||||>If you saved your model using `.save_pretrained` then you can load it using just `.from_pretrained` as you load any other HF model. Just pass the path of your saved model. You won't need to use `.from_encoder_decoder_pretrained`<|||||>Hi @anishthite, How did you train your Bert2Bert model? Can you post the code you used to train your model here? Dontt worry if it's a very long code snippet :-)<|||||>Hello! I managed to figure out the issue. I retrained and saved the encoder and decoder in their own folders. I then was able to load it in as @patil-suraj suggested. I guess earlier it was loading in the untrained model. Would it be helpful to redefine save_pretrained() for EncoderDecoder models to automatically split it into an encoder and decoder folder I can submit a PR if you want. ``` dataset = QADataset(dataset=args.traindataset, block_size=args.maxseqlen) qa_loader = DataLoader(dataset, batch_size=args.batch, shuffle=True) model.train() optimizer = AdamW(model.parameters(), lr=LEARNING_RATE) t_total = len(qa_loader) // args.gradient_acums * args.epochs scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=WARMUP_STEPS, num_training_steps = t_total) proc_seq_count = 0 sum_loss = 0.0 batch_count = 0 models_folder = "combinerslargeencoder" models_folder2 = "combinerslargedecoder" if not os.path.exists(models_folder): os.mkdir(models_folder) if not os.path.exists(models_folder2): os.mkdir(models_folder2) for epoch in range(args.epochs): print(f"EPOCH {epoch} started" + '=' * 30) for idx,qa in enumerate(qa_loader): print(str(idx) + ' ' + str(len(qa_loader))) inputs, labels = (qa[0], qa[1]) inputs = inputs.to(device) labels = labels.to(device) outputs = model(input_ids=inputs, decoder_input_ids=labels, lm_labels=labels) loss, logits = outputs[:2] loss = loss / args.gradient_acums loss.backward() sum_loss = sum_loss + loss.detach().data #proc_seq_count = proc_seq_count + 1 #if proc_seq_count == args.gradient_acums: # proc_seq_count = 0 batch_count += 1 if (idx + 1) % args.gradient_acums == 0: optimizer.step() scheduler.step() optimizer.zero_grad() model.zero_grad() if batch_count == 100: print(f"sum loss {sum_loss}") batch_count = 0 sum_loss = 0.0 # Store the model after each epoch to compare the performance of them torch.save(model.state_dict(), os.path.join(models_folder, f"combined_mymodel_{args.maxseqlen}{epoch}{args.gradient_acums}.pt")) model.save_pretrained(models_folder) model.encoder.save_pretrained(models_folder) model.decoder.save_pretrained(models_folder2) evaluate(args, model, tokenizer) ```<|||||>Why do you save the encoder and decoder model seperately?: ``` model.encoder.save_pretrained(models_folder) model.decoder.save_pretrained(models_folder2) ``` This line: ``` model.save_pretrained(models_folder) ``` should be enough. We moved away from saving the model to two separate folders, see: https://github.com/huggingface/transformers/pull/3383. 
Also the docs: https://huggingface.co/transformers/model_doc/encoderdecoder.html might be useful.
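A short sketch of the single-folder save/load flow described in the last comments (no separate encoder/decoder directories are needed):

```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

# ... fine-tune the model here ...

model.save_pretrained("bert2bert")                           # one folder, encoder and decoder together
reloaded = EncoderDecoderModel.from_pretrained("bert2bert")  # no from_encoder_decoder_pretrained needed
```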
transformers
4,170
closed
How to position encode a sentence?
BertTokenizer provides us with input_ids, attention_mask and token_type_ids, but I'm unable to get the so-called "position ids". I have read the documentation as well but was unable to find anything.
05-06-2020 05:14:39
05-06-2020 05:14:39
You don't need to provide position IDs to the model; the model will create them on its own. As with the `attention_mask` and `token_type_ids`, you only need to provide them if you want them to be different from the default. In the case of position IDs, provide them if you want to use a special scheme, different from what was used when pre-training the model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
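For completeness, a small example of passing explicit `position_ids` (here just reproducing the default 0..n-1 scheme; only needed when you want something different, as the answer says):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="pt")
seq_len = inputs["input_ids"].shape[1]

# Explicitly build what the model would otherwise create on its own: positions 0..seq_len-1.
position_ids = torch.arange(seq_len).unsqueeze(0)
outputs = model(**inputs, position_ids=position_ids)
print(outputs.last_hidden_state.shape)
```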
transformers
4,169
closed
Update __init__.py for AlbertMLMHead
https://github.com/huggingface/transformers/issues/4168
05-06-2020 04:27:33
05-06-2020 04:27:33
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>The issue was resolved.
transformers
4,168
closed
No name 'AlbertMLMHead' in module 'transformers'
# 🌟 New model addition ## Model description MLM model head ``` No name 'AlbertMLMHead' in module 'transformers' ``` ## Open source status * [x] the model implementation is available: ``` class AlbertMLMHead(nn.Module): def __init__(self, config): super().__init__() self.LayerNorm = nn.LayerNorm(config.embedding_size) self.bias = nn.Parameter(torch.zeros(config.vocab_size)) self.dense = nn.Linear(config.hidden_size, config.embedding_size) self.decoder = nn.Linear(config.embedding_size, config.vocab_size) self.activation = ACT2FN[config.hidden_act] # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` self.decoder.bias = self.bias def forward(self, hidden_states): hidden_states = self.dense(hidden_states) hidden_states = self.activation(hidden_states) hidden_states = self.LayerNorm(hidden_states) hidden_states = self.decoder(hidden_states) prediction_scores = hidden_states return prediction_scores ``` Include it in the `__init__.py` file and make it available for use. * [ ] the model weights are available: (give details) * [ ] who are the authors: (mention them, if possible by @gh-username)
05-06-2020 04:20:26
05-06-2020 04:20:26
Hi, you can import it as such: ```py from transformers.modeling_albert import AlbertMLMHead ``` It's not in the `__init__.py` as it's not a model but part of one.
transformers
4,167
closed
change order pytorch/tf in readme
05-05-2020 21:02:23
05-05-2020 21:02:23
transformers
4,166
closed
Tapas
# 🌟 New model addition ## Model description Tapas extends the BERT architecture and is a transformer-based table QA model. Paper: https://arxiv.org/abs/2004.02349 ## Open source status * [X] the model implementation is available: (give details) https://github.com/google-research/tapas * [X] the model weights are available: (give details) The pretrained weights and data for fine-tuning are linked in the repository readme * [x] who are the authors: @thomasmueller-google is one of the authors
05-05-2020 20:31:08
05-05-2020 20:31:08
hey, @thomwolf this would be a great addition! have a look into it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I think @NielsRogge is working on this over here: https://github.com/NielsRogge/transformers/tree/modeling_tapas
transformers
4,165
closed
[RFC] Sampling transform function for generation
See https://github.com/huggingface/transformers/issues/4164
05-05-2020 20:03:30
05-05-2020 20:03:30
(note that this will need to be refined to work with TF / beam search if we decide on the direction)<|||||>Closing in favor of #5416
transformers
4,164
closed
Add a sampling_transform callback to generation for arbitrary probability-warps
# 🚀 Feature request I'd like to add `sampling_transform` callback argument to all `generate` functions in `modeling_utils` that allows arbitrary sampling from the probability distribution during sequence generation. The signature of this function would be `(input_ids, next_probs, next_token) -> next_token`. ## Motivation The modeling_utils's generation function is getting pretty hairy -- including parameters for top p, top k, bad tokens, temperature, repetition penalty, etc.. Every new way of sampling means that we have to add more parameters and it further complicate the function. I believe the right way of solving this is to provide a function-argument that allows users to express arbitrary rules for the next sample. In the long run, we could replace all the other parameters with a set of pre-baked sampling_transform functions that can be composed at will. This method scales better to strange warps -- for example, in a project I'm working on (https://github.com/turtlesoupy/this-word-does-not-exist) I need to early-terminate sequences if they generate from a large set of bad tokens and need to continue generating if an EOS token is sampled too early. An example is here https://github.com/turtlesoupy/this-word-does-not-exist/blob/260e33a8f420b9be8b1e7260cb03c74d6231686e/title_maker_pro/datasets.py#L386 ## Your contribution I'll attach a sample PR that makes this work for pytorch and non-beam samples. If people like the idea, it should be easy to refine into a full PR that generalize to beam search and tensorflow.
05-05-2020 20:01:54
05-05-2020 20:01:54
Very interesting idea! I think we eventually have to make the generation function more general anyways. Maybe it's time to move this whole code: ```python # repetition penalty from CTRL paper (https://arxiv.org/abs/1909.05858) if repetition_penalty != 1.0: self.enforce_repetition_penalty_(next_token_logits, batch_size, 1, input_ids, repetition_penalty) if no_repeat_ngram_size > 0: # calculate a list of banned tokens to prevent repetitively generating the same ngrams # from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345 banned_tokens = calc_banned_ngram_tokens(input_ids, batch_size, no_repeat_ngram_size, cur_len) for batch_idx in range(batch_size): next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float("inf") if bad_words_ids is not None: # calculate a list of banned tokens according to bad words banned_tokens = calc_banned_bad_words_ids(input_ids, bad_words_ids) for batch_idx in range(batch_size): next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float("inf") # set eos token prob to zero if min_length is not reached if eos_token_id is not None and cur_len < min_length: next_token_logits[:, eos_token_id] = -float("inf") if do_sample: # Temperature (higher temperature => more likely to sample low probability tokens) if temperature != 1.0: next_token_logits = next_token_logits / temperature # Top-p/top-k filtering next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p) # Sample probs = F.softmax(next_token_logits, dim=-1) next_token = torch.multinomial(probs, num_samples=1).squeeze(1) if sampling_transform: next_token = sampling_transform(input_ids, probs, next_token) else: # Greedy decoding next_token = torch.argmax(next_token_logits, dim=-1) ``` even to a `Sampler` class with a `sampler.sample(input_ids, next_token_logits)` which can also include a generic function as proposed. What are your thoughts on this @yjernite @thomwolf @sshleifer @LysandreJik ?<|||||>Having a formal class sounds good to me; personally what I had in mind was a torchvision transforms type interface - so something like ``` Sampler.Compose([ temperature(2) , top_p(75), early_eos(), my_crazy_custom_transform(), ]) ``` The current operations are order-dependent which isn't necessarily apparent to the user in params. <|||||>@turtlesoupy that's definitely a feature we want! Would it be possible to apply the transform at the logit level rather than have it sample too? It seems like it fits the proposed use case and it would make beam search significantly easier.<|||||>@yjernite good call; the interface would have to be modified slightly to `(input_ids, next_probs, next_token) -> (next_probs)`. You might be able to fold the sampling procedure into the transforms. I know in my case I want to operate after a sample has been chosen ``` Sampler.Compose([ one_hot_draw(), my_crazy_custom_transform(), ]) ``` <|||||>@patrickvonplaten @yjernite finally got around to doing this -- can you take a look at #5420 and let me know if it can be merged? <|||||>Thanks a lot for the PR :-) I will take a look this week!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
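A purely illustrative transform with the proposed `(input_ids, next_probs, next_token) -> next_token` signature, along the lines of the "continue generating if an EOS token is sampled too early" use case from the request; the hook itself only exists in the draft PRs linked in the thread, and the default EOS id is GPT-2's:

```python
import torch

def resample_if_early_eos(input_ids, next_probs, next_token, eos_token_id=50256, min_len=20):
    # If EOS was sampled before min_len tokens have been generated,
    # redraw from the distribution with EOS masked out.
    if input_ids.shape[1] >= min_len:
        return next_token
    too_early = next_token == eos_token_id
    if too_early.any():
        probs = next_probs.clone()
        probs[:, eos_token_id] = 0.0
        probs = probs / probs.sum(dim=-1, keepdim=True)
        redraw = torch.multinomial(probs, num_samples=1).squeeze(1)
        next_token = torch.where(too_early, redraw, next_token)
    return next_token
```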
transformers
4,163
closed
Cannot use camembert for question answering
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Camembert Language I am using the model on (English, Chinese ...): French The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: Squad * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Load camembert for Q&A 2. Use the script for Q&A from the HuggingFace Doc 3. Get a Runtimeerror <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> from [https://huggingface.co/transformers/usage.html#question-answering](https://huggingface.co/transformers/usage.html#question-answering) : Note : I tried with `camembert-base`, `illuin/camembert-base-fquad` and `fmikaelian/camembert-base-fquad` ``` from transformers import AutoTokenizer, CamembertForQuestionAnswering import torch tokenizer = AutoTokenizer.from_pretrained("camembert-base") model = CamembertForQuestionAnswering.from_pretrained("camembert-base") text = r"""Some text in french""" questions = ["Just one question in french"] for question in questions: inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="pt") input_ids = inputs["input_ids"].tolist()[0] text_tokens = tokenizer.convert_ids_to_tokens(input_ids) answer_start_scores, answer_end_scores = model(**inputs) answer_start = torch.argmax( answer_start_scores ) # Get the most likely beginning of answer with the argmax of the score answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])) print(f"Question: {question}") print(f"Answer: {answer}\n") ``` It fails as well with the `pipeline` method ``` q_a_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer) q_a_pipeline({'question': question, 'context': text} ``` Stack trace : ``` Traceback (most recent call last): File "/home/covid_nlu/.local/lib/python3.8/site-packages/sanic/app.py", line 976, in handle_request response = await response File "test_server_sanic.py", line 72, in get_answer results = [q_a_pipeline({'question': question, 'context': doc}) File "test_server_sanic.py", line 72, in <listcomp> results = [q_a_pipeline({'question': question, 'context': doc}) File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/pipelines.py", line 1109, in __call__ start, end = self.model(**fw_args) File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 663, in forward outputs = self.roberta( File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/modeling_bert.py", line 728, in forward embedding_output = self.embeddings( File 
"/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py", line 64, in forward return super().forward( File "/home/covid_nlu/.local/lib/python3.8/site-packages/transformers/modeling_bert.py", line 175, in forward token_type_embeddings = self.token_type_embeddings(token_type_ids) File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 112, in forward return F.embedding( File "/home/covid_nlu/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1724, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Run the exemple in french as with Bert in english Note : I am able to run the exemple with a Hugging Face Pipeline (with all the different camembert model, comunity or not) ``` bert_tok = AutoTokenizer.from_pretrained("camembert-base") bert = CamembertForQuestionAnswering.from_pretrained("camembert-base") nlp = pipeline('question-answering', model=bert, tokenizer=bert_tok) answer = nlp({'question': "A question in french", 'context': a_big_string_in_french}) print(answer) ``` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Linux-5.6.10-arch1-1-x86_64-with-glibc2.2.5 - Python version: 3.8.2 - PyTorch version (GPU?): 1.4.0 (True) (Same with 1.5) - Tensorflow version (GPU?): 2.2.0-rc4 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
05-05-2020 19:44:38
05-05-2020 19:44:38
transformers
4,162
closed
Add model card for the NER model
Add the model-card for a new NER model.
05-05-2020 18:49:36
05-05-2020 18:49:36
transformers
4,161
closed
run_generation.py - use of < redirect for input text file only reads first line - solution requested
Example: python3 examples/run_generation.py --model_type=gpt2 --model_name_or_path="models/custom-774M" --length=400 < test.txt > test_gpt2_400.txt Version 2.0.3 and earlier accepts a simple linux redirect < test.txt and reads the file line-by-line while generating the corresponding output to > test_gpt2_400_out.txt This works perfectly and does not require any changes. The test.txt is a simple 3 line text file with 1 sentence on each line. Versions 2.0.4 to 2.0.8 do not accept the < redirect in the same manner. The above only reads the first line of 'test.txt', generates, then exits. Output is the same for the first line. Is there a simple change to the 2.0.8 version of 'run_generation.py' that will return this useful terminal function under ubuntu 18.04? Have tried 'while - read' loops and only managed to generate the Model prompt >>> 3x requesting terminal input: exec 3<"test.txt" while IFS= read -r line <&3; do python3 examples/run_generation.py --model_type=gpt2 --model_name_or_path="models/custom-774M" --length=400 done > test_gpt2_400_out.txt Any help is appreciated.
05-05-2020 18:45:44
05-05-2020 18:45:44
I think in your case it might make more sense to use the `TextGenerationPipeline`. It's a two-liner that does the same as the script. You can wrap an `open(input_text_file)` and `file.write()` around these two lines in your use case. See the documentation for the generation pipeline here: https://huggingface.co/transformers/usage.html#text-generation<|||||>Closing for now
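A hedged sketch of the suggested pipeline-based loop over the input file (file names follow the example command above; the local model path is the user's own and assumed to contain both model and tokenizer files):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="models/custom-774M")

with open("test.txt", encoding="utf-8") as fin, open("test_gpt2_400_out.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        prompt = line.strip()
        if not prompt:
            continue
        result = generator(prompt, max_length=400)
        fout.write(result[0]["generated_text"] + "\n")
```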
transformers
4,160
closed
[Marian] Multilingual models require language codes
[ ] figure out codes [ ] update/subclass `MarianTokenizer`
05-05-2020 17:49:08
05-05-2020 17:49:08
transformers
4,159
closed
Tokenizer.batch_decode convenience method
Convenience method that turns the list returned by `model.generate` into a list of sentences.
05-05-2020 17:43:15
05-05-2020 17:43:15
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=h1) Report > Merging [#4159](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7822cd38a0e18004ab1a55bfe85e8b3bc0d8857a&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4159/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4159 +/- ## ======================================= Coverage 78.14% 78.15% ======================================= Files 120 120 Lines 20053 20053 ======================================= + Hits 15670 15672 +2 + Misses 4383 4381 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4159/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `87.32% <ø> (-0.35%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4159/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.51% <100.00%> (+0.02%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4159/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.60% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4159/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=footer). Last update [7822cd3...cc21682](https://codecov.io/gh/huggingface/transformers/pull/4159?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I'm in favor of this since we also have a `batch_encode_plus` method. <|||||>One small thing: the `encode_plus` method uses the `batch_encode_plus` whereas here it's the other way around. But this does not bother me too much.<|||||>I don't have a strong knowledge of the API design for tokenizers so will let others chime in instead.
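A small usage sketch of the helper added here (the checkpoint name is just an example):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fr-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-fr-en")

batch = tokenizer(["Bonjour le monde.", "Comment ça va ?"], padding=True, return_tensors="pt")
generated = model.generate(**batch)

# One decoded sentence per generated sequence, instead of decoding in a loop.
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```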
transformers
4,158
closed
[Marian] @-@ symbol causes strange generations
good fr-en test case: ``` Veuillez m' apporter une demi @-@ bouteille de vin . ```
05-05-2020 17:31:13
05-05-2020 17:31:13
This is only an issue for BPE models, which we are not supporting.
transformers
4,157
closed
[Marian] Readme parser defaults to porting oldest model
- logic should be newest model or all models with unique names. - the download URL (and probably other metadata) should be put in a `model_cards/` file.
05-05-2020 17:30:17
05-05-2020 17:30:17
Not true, fixed on master.
transformers
4,156
closed
Removed the use of deprecated Variable API in PPLM example
05-05-2020 17:02:24
05-05-2020 17:02:24
Could you please tell what does this mean ? ``` #!/bin/bash -eo pipefail black --check --line-length 119 --target-version py35 examples templates tests src utils would reformat /home/circleci/transformers/examples/pplm/run_pplm.py Oh no! 💥 💔 💥 1 file would be reformatted, 266 files would be left unchanged. Exited with code exit status 1 ``` <|||||>You need to run `make style` and `make quality` locally and push the resulting changes<|||||>Also this file was moved around so you'll need to re-apply your changes to new location<|||||>I ran both make commands as you said. ``` (base) u37216@s001-n007:~/transformers$ make style black --line-length 119 --target-version py35 examples templates tests src utils reformatted /home/u37216/transformers/src/transformers/__init__.py reformatted /home/u37216/transformers/templates/adding_a_new_example_script/run_xxx.py reformatted /home/u37216/transformers/templates/adding_a_new_example_script/utils_xxx.py All done! ✨ 🍰 ✨ 3 files reformatted, 272 files left unchanged. isort --recursive examples templates tests src utils Fixing /home/u37216/transformers/templates/adding_a_new_example_script/run_xxx.py Fixing /home/u37216/transformers/templates/adding_a_new_example_script/utils_xxx.py Fixing /home/u37216/transformers/src/transformers/__init__.py (base) u37216@s001-n007:~/transformers$ make quality black --check --line-length 119 --target-version py35 examples templates tests src utils would reformat /home/u37216/transformers/src/transformers/__init__.py would reformat /home/u37216/transformers/templates/adding_a_new_example_script/run_xxx.py would reformat /home/u37216/transformers/templates/adding_a_new_example_script/utils_xxx.py Oh no! 💥 💔 💥 3 files would be reformatted, 272 files would be left unchanged. Makefile:6: recipe for target 'quality' failed make: *** [quality] Error 1 ``` Is this related to my code ? I didn't even touch other parts of the file. I think the formatting issue came from master itself (I checked it). <|||||>@julien-c Can you please see this ?
transformers
4,155
closed
num_samples=0 when using pretrained model
When training the "run_language_modelling.py" example with a pretrained bert model, I get the error: """ Traceback (most recent call last): File "run_language_modeling.py", line 284, in <module> main() File "run_language_modeling.py", line 254, in main trainer.train(model_path=model_path) File "C:\Users\AppData\Roaming\Python\Python38\site-packages\transformers\trainer.py", line 243, in train train_dataloader = self.get_train_dataloader() File "C:\Users\AppData\Roaming\Python\Python38\site-packages\transformers\trainer.py", line 179, in get_train_dataloader RandomSampler(self.train_dataset) if self.args.local_rank == -1 else DistributedSampler(self.train_dataset) File "C:\Users\Anaconda3\envs\kompetence\lib\site-packages\torch\utils\data\sampler.py", line 93, in __init__ raise ValueError("num_samples should be a positive integer " ValueError: num_samples should be a positive integer value, but got num_samples=0 """ The cached_lm_berttokenizer that gets saved to the data folder looks like this: """ €]”. """ In the folder for my bert model, I have a config.json, pytorch_model.bin, special_tokens_map.json, vocab.txt, tokenizer_config.json. I run the following command python run_language_modeling.py --output_dir=. --model_type=bert --model_name_or_path="H:\danish_bert_uncased_v2\\" --mlm --train_data_file="H:\\data\train.txt" --eval_data_file="H:\data\test.txt" --do_eval --do_train --overwrite_output_dir -block_size=1000 Is anything missing here? The data is composed of sentences seperated by newlines and an empty line between documents.
05-05-2020 14:50:50
05-05-2020 14:50:50
Hi, did you find out the reason for this error? I get the same error even on the Wikitext-2 corpus.<|||||>Yeah, it is an issue with how your data is read. Try the `--line_by_line` argument.
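For reference, `--line_by_line` makes the script build a `LineByLineTextDataset`, which is also a quick way to sanity-check that the file is being read at all; a small sketch (the path and block size are placeholders):

```python
from transformers import AutoTokenizer, LineByLineTextDataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Every non-empty line of the file becomes one training example.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
print(len(dataset))  # should be > 0; if it is 0, the num_samples=0 error above is expected
```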
transformers
4,154
closed
Rewritten batch support in pipelines.
Batch support in Pipeline was confusing and not well tested. This PR rewrites all the content of `DefaultArgumentHandler`, which handles most of the input conversions (args, kwargs, batched, etc.), and adds unit tests for this specific class. Signed-off-by: Morgan Funtowicz <[email protected]>
05-05-2020 14:42:24
05-05-2020 14:42:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=h1) Report > Merging [#4154](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/818463ee8eaf3a1cd5ddc2623789cbd7bb517d02&el=desc) will **increase** coverage by `0.03%`. > The diff coverage is `96.42%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4154/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4154 +/- ## ========================================== + Coverage 78.79% 78.83% +0.03% ========================================== Files 114 114 Lines 18711 18726 +15 ========================================== + Hits 14743 14762 +19 + Misses 3968 3964 -4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4154/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.27% <96.42%> (+1.32%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4154/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=footer). Last update [818463e...95a257c](https://codecov.io/gh/huggingface/transformers/pull/4154?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok so this should be ready to merge, right @mfuntowicz?<|||||>+1 Would love a merge here so I can more easily improve `TranslationPipeline`
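A short sketch of the batched pipeline call this PR cleans up (default checkpoint; the example sentences are made up):

```python
from transformers import pipeline

nlp = pipeline("sentiment-analysis")

# A single string and a list of strings are both accepted;
# the batched call returns one result dict per input.
print(nlp("This library keeps getting better."))
print(nlp(["I love this.", "I am not a fan of that."]))
```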
transformers
4,153
closed
Embedding index getting out of range while running camembert model
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Camembert Language I am using the model on (English, Chinese ...): French The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Take a file with french text 2. Load pretrained Camembert Model and tokenizer as in the doc 3. Run inference <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> * Initialisation : `bert = CamembertModel.from_pretrained("camembert-base")` `bert_tok = CamembertTokenizer.from_pretrained("camembert-base")` * Inference : like [https://huggingface.co/transformers/usage.html#question-answering](https://huggingface.co/transformers/usage.html#question-answering) ``` inputs = bert_tok.encode_plus(question, context, add_special_tokens=True, return_tensors="pt") input_ids = inputs["input_ids"].tolist()[0] text_tokens = bert_tok.convert_ids_to_tokens(input_ids) answer_start_scores, answer_end_scores = bert(**inputs) ``` It works by removing the context argument (text_pair argument) but I need it to do question answering with other models and it lead to the same error with pipelines * Stack trace : ``` IndexError Traceback (most recent call last) <ipython-input-9-73762e6cf69b> in <module> 2 for utterances in file.readlines(): 3 input_tensor = bert_tok.batch_encode_plus([utterances], pad_to_max_length=True, return_tensors="pt") ----> 4 last_hidden, pool = bert(input_tensor["input_ids"], input_tensor["attention_mask"]) 5 6 ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 functools.update_wrapper(wrapper, hook) 549 grad_fn.register_hook(wrapper) --> 550 return result 551 552 def __setstate__(self, state): ~/.local/lib/python3.8/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask) 780 head_mask = [None] * self.config.num_hidden_layers 781 --> 782 embedding_output = self.embeddings( 783 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds 784 ) ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 functools.update_wrapper(wrapper, hook) 549 grad_fn.register_hook(wrapper) --> 550 return result 551 552 def __setstate__(self, state): ~/.local/lib/python3.8/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 62 position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds) 63 ---> 64 return super().forward( 65 input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds 66 ) ~/.local/lib/python3.8/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 172 if inputs_embeds is None: 173 inputs_embeds = self.word_embeddings(input_ids) --> 174 position_embeddings = 
self.position_embeddings(position_ids) 175 token_type_embeddings = self.token_type_embeddings(token_type_ids) 176 ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 functools.update_wrapper(wrapper, hook) 549 grad_fn.register_hook(wrapper) --> 550 return result 551 552 def __setstate__(self, state): ~/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input) 110 111 def forward(self, input): --> 112 return F.embedding( 113 input, self.weight, self.padding_idx, self.max_norm, 114 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/.local/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1722 if dim == 3: 1723 div = pad(div, (0, 0, size // 2, (size - 1) // 2)) -> 1724 div = avg_pool2d(div, (size, 1), stride=1).squeeze(1) 1725 else: 1726 sizes = input.size() IndexError: index out of range in self ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Run inference without any error ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> `transformers` version: 2.8.0 - Platform: Linux-5.6.10-arch1-1-x86_64-with-glibc2.2.5 - Python version: 3.8.2 - PyTorch version (GPU?): 1.4.0 (True) (Same with 1.5) - Tensorflow version (GPU?): 2.2.0-rc4 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
05-05-2020 14:40:31
05-05-2020 14:40:31
I am running into the same error on my own script. Interestingly it only appears on CPU... Did you find a solution?<|||||>No. I want to build a French Q&A pipeline; surprisingly, with the Hugging Face pipeline everything works great, and I can plug the code into a local server and make requests to it. But when I try to use the same code in a Docker environment to ship it, it fails with this error (only in French with CamemBERT; classic BERT works fine). I get the error locally as well if I try not to use the Hugging Face pipeline but write my own inference (as described above).<|||||>I can confirm it's working on GPU locally (and even in a Docker container) but it's still stuck on CPU.<|||||>I actually figured out my error. I was adding special tokens to the tokenizer (like begin-of-sequence) but did not resize the model's token embeddings via: `model.resize_token_embeddings(len(self.tokenizer))` Just in case someone else is not reading the documentation carefully enough :see_no_evil: Considering that, the error message did actually make sense. <|||||>Hi @Ierezell, there is indeed an issue which I'm patching in #4289. Please be aware that you're using `CamembertModel`, which cannot be used for question answering. Please use `CamembertForQuestionAnswering` instead.<|||||>It's patched now, please install from source and there should be no error anymore!<|||||>Hi @LysandreJik, I'm conscious that I used it with a non-QA model, but it was to try the base model supported by Hugging Face. I tried as well with `illuin/camembert-base-fquad` (large as well) and with `fmikaelian/camembert-base-fquad`. I will install the latest version and try it. Thanks a lot for the fast support!<|||||>I tried your fix but it led to key errors: ``` File "/home/pedro/.local/lib/python3.8/site-packages/transformers/pipelines.py", line 1156, in __call__ answers += [ File "/home/pedro/.local/lib/python3.8/site-packages/transformers/pipelines.py", line 1159, in <listcomp> "start": np.where(char_to_word == feature.token_to_orig_map[s])[0][0].item(), KeyError: 0 ```<|||||>Could you provide a reproducible script? I can't reproduce.<|||||>My problem here was surely linked with #4674; everything seems to work now, thanks a lot
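A minimal sketch of the resize fix mentioned above (the added token is a made-up example):

```python
from transformers import CamembertModel, CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertModel.from_pretrained("camembert-base")

# Any token added to the tokenizer needs a matching row in the embedding matrix,
# otherwise its id indexes past the end of the table ("index out of range in self").
tokenizer.add_special_tokens({"additional_special_tokens": ["<new_token>"]})
model.resize_token_embeddings(len(tokenizer))
```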
transformers
4,152
closed
[Marian] documentation and AutoModel support
- Adds integration tests for en-fr, fr-en. - Easier bulk conversion - remove unused pretrained_model_archive_map constant. - boilerplate to make AutoModelWithLMHead, AutoTokenizer, AutoConfig work. ### Metrics: For [fr-en test set](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.test.txt): - BLEU score from posted translations: 57.4979 - BLEU score from `MarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-fr-en')`: 57.4817 - no perf change for fp16 - can fit batch_size=512 on 16GB card in fp16 - speed: 89s for 5k examples = 56 examples/second
05-05-2020 12:53:00
05-05-2020 12:53:00
### Screenshots of Documentation ![image](https://user-images.githubusercontent.com/6045025/81184705-b8f5f700-8f7e-11ea-896f-9571d542cb32.png) ![image](https://user-images.githubusercontent.com/6045025/81184741-c317f580-8f7e-11ea-96f2-a38c89224f7b.png) [ x] Shows up in TOC: ![image](https://user-images.githubusercontent.com/6045025/81184776-cd39f400-8f7e-11ea-8420-cf5f34eac1d7.png) <|||||>The documentation looks great!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=h1) Report > Merging [#4152](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a01a3fecb4fe203c086d3d7450b76bbcfa035725&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4152/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4152 +/- ## ======================================= Coverage 77.52% 77.52% ======================================= Files 120 120 Lines 19932 19932 ======================================= Hits 15453 15453 Misses 4479 4479 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=footer). Last update [a01a3fe...a01a3fe](https://codecov.io/gh/huggingface/transformers/pull/4152?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
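For a quick end-to-end check of the converted fr-en model discussed above, a minimal sketch (the input sentence is arbitrary, not from the test set):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

input_ids = tokenizer.encode("Où est l'arrêt de bus ?", return_tensors="pt")
generated = model.generate(input_ids)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```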
transformers
4,151
closed
How to pre-train BART model
How can one pre-train a BART model in an unsupervised manner? Is there any example?
05-05-2020 10:00:43
05-05-2020 10:00:43
We still need to provide a good docstring/notebook for this. It's on our ToDo-List. :-) Or @sshleifer - is there already something for Bart? <|||||>Nothing yet, would be good to add!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I have seen the same [issue](https://github.com/pytorch/fairseq/issues/1899) in fairseq BART!.<|||||>Hi, any news about bart pre-training? <|||||> who can tell me how to pre-train the bart on my own dataset? I am so confused .... thank you so much<|||||>Maybe this comment can help: https://github.com/huggingface/transformers/issues/5096#issuecomment-645860271<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Any news on this please? <|||||>not so far, would be great to have it. Thanks.<|||||>I and my co-worker wrote a demo according to roberta pretraining demo. ``` #encoding=utf-8 from transformers import ( BartForConditionalGeneration, BartTokenizer, BartForCausalLM, Seq2SeqTrainingArguments, Seq2SeqTrainer ) import torch from torch.utils.data import random_split # ## Initiating model and trainer for training from transformers import BartModel, BartConfig from transformers import BartTokenizerFast configuration = BartConfig( vocab_size=52000, max_position_embeddings=258, d_model=256, encoder_layers=3, decoder_layers=3, encoder_attention_heads=4, decoder_attention_heads=4, decoder_ffn_dim=1024, encoder_ffn_dim=1024, ) model = BartForCausalLM(configuration) tokenizer = BartTokenizerFast.from_pretrained("./dic", max_len=256, additional_special_tokens=['[CH]', '[OTHER]', '[VAR]', '[NUM]']) # ### HTTP Request DataPreparing & Modeling data = [] with open("../data/sample.txt") as f1: for src in f1: data.append( { "seq2seq": { "input": src.strip() } } ) print(f'total size of data is {len(data)}') # splitting dataset into train, validation split = 0.2 train_dataset, eval_dataset = random_split(data, lengths=[int((1-split)*len(data))+1, int(split*len(data))]) # defining collator functioon for preparing batches on the fly .. def data_collator(features:list): inputs = [f["seq2seq"]["input"] for f in features] batch = tokenizer.prepare_seq2seq_batch(src_texts=inputs, max_length=256, padding='max_length') batch["labels"] = batch["input_ids"].copy() for k in batch: batch[k] = torch.tensor(batch[k]) return batch batch_out = data_collator(eval_dataset) print(batch_out) print(batch_out['input_ids'].shape,batch_out['labels'].shape,batch_out['attention_mask'].shape) # defining training related arguments args = Seq2SeqTrainingArguments(output_dir="clm-checkpoints", do_train=True, do_eval=True, evaluation_strategy="epoch", per_device_train_batch_size=8, per_device_eval_batch_size=8, learning_rate=5e-5, num_train_epochs=1, logging_dir="./logs") # defining trainer using 🤗 trainer = Seq2SeqTrainer(model=model, args=args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset) # ## Training time trainer.train() # It will take hours to train this model on this dataset # lets save model trainer.evaluate(eval_dataset=eval_dataset) trainer.save_model("clm-checkpoints") ```<|||||>> I and my co-worker wrote a demo according to roberta pretraining demo. 
> > ``` > #encoding=utf-8 > > from transformers import ( > BartForConditionalGeneration, BartTokenizer, BartForCausalLM, > Seq2SeqTrainingArguments, Seq2SeqTrainer > ) > > import torch > from torch.utils.data import random_split > > > # ## Initiating model and trainer for training > from transformers import BartModel, BartConfig > from transformers import BartTokenizerFast > > > configuration = BartConfig( > vocab_size=52000, > max_position_embeddings=258, > d_model=256, > encoder_layers=3, > decoder_layers=3, > encoder_attention_heads=4, > decoder_attention_heads=4, > decoder_ffn_dim=1024, > encoder_ffn_dim=1024, > ) > model = BartForCausalLM(configuration) > tokenizer = BartTokenizerFast.from_pretrained("./dic", max_len=256, additional_special_tokens=['[CH]', '[OTHER]', '[VAR]', '[NUM]']) > > > # ### HTTP Request DataPreparing & Modeling > data = [] > with open("../data/sample.txt") as f1: > for src in f1: > data.append( > { > "seq2seq": { > "input": src.strip() > } > } > ) > print(f'total size of data is {len(data)}') > > > # splitting dataset into train, validation > split = 0.2 > train_dataset, eval_dataset = random_split(data, lengths=[int((1-split)*len(data))+1, int(split*len(data))]) > > > # defining collator functioon for preparing batches on the fly .. > def data_collator(features:list): > inputs = [f["seq2seq"]["input"] for f in features] > batch = tokenizer.prepare_seq2seq_batch(src_texts=inputs, max_length=256, padding='max_length') > batch["labels"] = batch["input_ids"].copy() > for k in batch: > batch[k] = torch.tensor(batch[k]) > return batch > > > batch_out = data_collator(eval_dataset) > print(batch_out) > print(batch_out['input_ids'].shape,batch_out['labels'].shape,batch_out['attention_mask'].shape) > > > # defining training related arguments > args = Seq2SeqTrainingArguments(output_dir="clm-checkpoints", > do_train=True, > do_eval=True, > evaluation_strategy="epoch", > per_device_train_batch_size=8, > per_device_eval_batch_size=8, > learning_rate=5e-5, > num_train_epochs=1, > logging_dir="./logs") > > > # defining trainer using 🤗 > trainer = Seq2SeqTrainer(model=model, > args=args, > data_collator=data_collator, > train_dataset=train_dataset, > eval_dataset=eval_dataset) > > > # ## Training time > trainer.train() > # It will take hours to train this model on this dataset > > > # lets save model > trainer.evaluate(eval_dataset=eval_dataset) > trainer.save_model("clm-checkpoints") > ``` Thanks for the code example, I am also planning on implementing pretrained from scratch, and I've got several questions for the code - I noticed that you use pretrained bart tokenizer, how can I pretrain it for different language? - How much compute did you use for your implementation?<|||||>> > I and my co-worker wrote a demo according to roberta pretraining demo. 
> > ``` > > #encoding=utf-8 > > > > from transformers import ( > > BartForConditionalGeneration, BartTokenizer, BartForCausalLM, > > Seq2SeqTrainingArguments, Seq2SeqTrainer > > ) > > > > import torch > > from torch.utils.data import random_split > > > > > > # ## Initiating model and trainer for training > > from transformers import BartModel, BartConfig > > from transformers import BartTokenizerFast > > > > > > configuration = BartConfig( > > vocab_size=52000, > > max_position_embeddings=258, > > d_model=256, > > encoder_layers=3, > > decoder_layers=3, > > encoder_attention_heads=4, > > decoder_attention_heads=4, > > decoder_ffn_dim=1024, > > encoder_ffn_dim=1024, > > ) > > model = BartForCausalLM(configuration) > > tokenizer = BartTokenizerFast.from_pretrained("./dic", max_len=256, additional_special_tokens=['[CH]', '[OTHER]', '[VAR]', '[NUM]']) > > > > > > # ### HTTP Request DataPreparing & Modeling > > data = [] > > with open("../data/sample.txt") as f1: > > for src in f1: > > data.append( > > { > > "seq2seq": { > > "input": src.strip() > > } > > } > > ) > > print(f'total size of data is {len(data)}') > > > > > > # splitting dataset into train, validation > > split = 0.2 > > train_dataset, eval_dataset = random_split(data, lengths=[int((1-split)*len(data))+1, int(split*len(data))]) > > > > > > # defining collator functioon for preparing batches on the fly .. > > def data_collator(features:list): > > inputs = [f["seq2seq"]["input"] for f in features] > > batch = tokenizer.prepare_seq2seq_batch(src_texts=inputs, max_length=256, padding='max_length') > > batch["labels"] = batch["input_ids"].copy() > > for k in batch: > > batch[k] = torch.tensor(batch[k]) > > return batch > > > > > > batch_out = data_collator(eval_dataset) > > print(batch_out) > > print(batch_out['input_ids'].shape,batch_out['labels'].shape,batch_out['attention_mask'].shape) > > > > > > # defining training related arguments > > args = Seq2SeqTrainingArguments(output_dir="clm-checkpoints", > > do_train=True, > > do_eval=True, > > evaluation_strategy="epoch", > > per_device_train_batch_size=8, > > per_device_eval_batch_size=8, > > learning_rate=5e-5, > > num_train_epochs=1, > > logging_dir="./logs") > > > > > > # defining trainer using 🤗 > > trainer = Seq2SeqTrainer(model=model, > > args=args, > > data_collator=data_collator, > > train_dataset=train_dataset, > > eval_dataset=eval_dataset) > > > > > > # ## Training time > > trainer.train() > > # It will take hours to train this model on this dataset > > > > > > # lets save model > > trainer.evaluate(eval_dataset=eval_dataset) > > trainer.save_model("clm-checkpoints") > > ``` > > Thanks for the code example, I am also planning on implementing pretrained from scratch, and I've got several questions for the code > > * I noticed that you use pretrained bart tokenizer, how can I pretrain it for different language? > * How much compute did you use for your implementation? For the first question, just like this: ``` from tokenizers import (ByteLevelBPETokenizer,SentencePieceBPETokenizer,BertWordPieceTokenizer) tokenizer = ByteLevelBPETokenizer() paths = ['./data/corpus.txt'] tokenizer.train(files=paths, vocab_size = 15000, min_frequency=6, special_tokens = ["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) tokenizer.save_model("./data/dic/") ``` For the other question, i trained it with 12G gpu memory,but it may be completed with samller gpu memory. And also you could adjust you parameters to your server environment.<|||||>@myechona Thanks for your code. I have a question about it. 
There are some tasks like text-filling and sentence-permutation during pretrain stage, i want to know whether the "input_ids" is for masked sentence and the "labels" is for origin sentence?<|||||>If anyone wants to train their MBART model then feel free to use this. https://github.com/prajdabre/yanmtt Contributions are welcome!<|||||>> I and my co-worker wrote a demo according to roberta pretraining demo. > > ``` > #encoding=utf-8 > > from transformers import ( > BartForConditionalGeneration, BartTokenizer, BartForCausalLM, > Seq2SeqTrainingArguments, Seq2SeqTrainer > ) > > import torch > from torch.utils.data import random_split > > > # ## Initiating model and trainer for training > from transformers import BartModel, BartConfig > from transformers import BartTokenizerFast > > > configuration = BartConfig( > vocab_size=52000, > max_position_embeddings=258, > d_model=256, > encoder_layers=3, > decoder_layers=3, > encoder_attention_heads=4, > decoder_attention_heads=4, > decoder_ffn_dim=1024, > encoder_ffn_dim=1024, > ) > model = BartForCausalLM(configuration) > tokenizer = BartTokenizerFast.from_pretrained("./dic", max_len=256, additional_special_tokens=['[CH]', '[OTHER]', '[VAR]', '[NUM]']) > > > # ### HTTP Request DataPreparing & Modeling > data = [] > with open("../data/sample.txt") as f1: > for src in f1: > data.append( > { > "seq2seq": { > "input": src.strip() > } > } > ) > print(f'total size of data is {len(data)}') > > > # splitting dataset into train, validation > split = 0.2 > train_dataset, eval_dataset = random_split(data, lengths=[int((1-split)*len(data))+1, int(split*len(data))]) > > > # defining collator functioon for preparing batches on the fly .. > def data_collator(features:list): > inputs = [f["seq2seq"]["input"] for f in features] > batch = tokenizer.prepare_seq2seq_batch(src_texts=inputs, max_length=256, padding='max_length') > batch["labels"] = batch["input_ids"].copy() > for k in batch: > batch[k] = torch.tensor(batch[k]) > return batch > > > batch_out = data_collator(eval_dataset) > print(batch_out) > print(batch_out['input_ids'].shape,batch_out['labels'].shape,batch_out['attention_mask'].shape) > > > # defining training related arguments > args = Seq2SeqTrainingArguments(output_dir="clm-checkpoints", > do_train=True, > do_eval=True, > evaluation_strategy="epoch", > per_device_train_batch_size=8, > per_device_eval_batch_size=8, > learning_rate=5e-5, > num_train_epochs=1, > logging_dir="./logs") > > > # defining trainer using 🤗 > trainer = Seq2SeqTrainer(model=model, > args=args, > data_collator=data_collator, > train_dataset=train_dataset, > eval_dataset=eval_dataset) > > > # ## Training time > trainer.train() > # It will take hours to train this model on this dataset > > > # lets save model > trainer.evaluate(eval_dataset=eval_dataset) > trainer.save_model("clm-checkpoints") > ``` Thanks for your code, it really helps.<|||||>I'm most interested in sentence infilling, which this script doesn't really seem to address (though my understanding was that BART training generally involves masking and permutation). Is there an additional step I need to add for the infilling functionality?<|||||> > We still need to provide a good docstring/notebook for this. It's on our ToDo-List. :-) > > Or @sshleifer - is there already something for Bart? Hi, any update on this? @vanpelt <|||||>I actually decided to jump over to T5 and use the `run_t5_mlm_flax.py` script. Seems to be working so far, though it's very new, so missing some conveniences.... 
it sounds like that stuff is underway!<|||||>> I actually decided to jump over to T5 and use the `run_t5_mlm_flax.py` script. Seems to be working so far, though it's very new, so missing some conveniences.... it sounds like that stuff is underway! Great, I was initially looking at those scripts to get some ideas about the pre-training script, but since then thought the Huggingface guys might have come up with a resource to do this. Apparently, it's still underway! :) <|||||>We've released [nanoT5](https://github.com/PiotrNawrot/nanoT5) that reproduces T5-model (similar to BART) pre-training in PyTorch (not Flax). You can take a look! Any suggestions are more than welcome.
transformers
4,150
closed
Config File
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hi everybody, is there a way to automatically obtain the configuration file for pre-trained TensorFlow T5 network? Specifically the config.json file which I want to use to convert a tf model into pytorch using the script provided by HuggingFace. Just to be more clear, I want something like that: https://s3.amazonaws.com/models.huggingface.co/bert/t5-small-config.json, with the specifications for my model. Thanks in advance :)
05-05-2020 08:13:45
05-05-2020 08:13:45
You get the default config file as written on AWS with ```python config = T5Config.from_pretrained("t5-small") ``` If you want to change anything on the config after you can update it with a dict of your specifications: ```python config.update(your_dict) ```<|||||>Hello @patrickvonplaten ! I have tried to update the config file using new dict ``` new_dict={ "d_ff": 2048, "d_kv": 64, "d_model": 512, "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "relu", "initializer_factor": 1.0, "is_encoder_decoder": True, "layer_norm_epsilon": 1e-06, "model_type": "t5", "num_decoder_layers": 6, "num_heads": 8, "num_layers": 6, "pad_token_id": 0, "relative_attention_num_buckets": 32, "transformers_version": "4.5.1", "use_cache": False, "vocab_size": 32128, "num_input_sent": 4, # TODO: ADDED "max_source_len ": 25, # TODO: ADDED "max_target_len": 25, # TODO: ADDED } ``` print(onfig.d_ff) gives 2048 but: print(max_source_len) gives me an error "AttributeError: 'T5Config' object has no attribute 'max_source_len'" how to use the new added configurations after they are added?<|||||>Hey @Arij-Aladel, What would be the purpose of adding parameters to the config, such as `max_target_len` that are not used in the corresponding model `T5Model`?<|||||>@patrickvonplaten I am trying to build my model depending on t5 hugging face API and I need to build my own config. Whatever the purpose is , the question still what is th benefit of updating the parameters if I can not reuse them?<|||||>Ok thanks I have solved it it was my bad, so sorry! every thing works well
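To write that configuration to disk for the conversion script, the config object can be saved directly; a small sketch (output paths are placeholders):

```python
from transformers import T5Config

config = T5Config.from_pretrained("t5-small")
config.update({"d_model": 512})        # optional: tweak fields before saving

config.save_pretrained("./my_t5")      # writes ./my_t5/config.json
config.to_json_file("./config.json")   # or write a single file directly
```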
transformers
4,149
closed
Fine-tuning T5 in Tensorflow
I wonder whether there is an example/tutorial on how to fine-tune T5 for a particular task (say translation) in TensorFlow. The documentation says that `This model is a tf.keras.Model sub-class`. However, I am not really sure what the output is for the Keras model. It seems like the output (the target sequence, in the case of translation) is considered an input for the T5 model as well. Thank you in advance.
05-05-2020 00:41:44
05-05-2020 00:41:44
Hi @tqdo, good question. We will add a better explanation for TF T5 soon :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@tqdo I open-sourced fine-tuning T5 with a customized training loop in TensorFlow 2.0+: https://github.com/wangcongcong123/ttt Hope this helps.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
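Until better docs land, here is a rough custom-training-step sketch for TF T5. The sentence pair is invented and keyword arguments have shifted a bit across versions, so treat it as an illustration of teacher forcing (decoder sees the right-shifted target, the unshifted target serves as labels) rather than the canonical recipe:

```python
import tensorflow as tf
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

src = tokenizer.encode("translate English to German: The house is wonderful.", return_tensors="tf")
tgt = tokenizer.encode("Das Haus ist wunderbar.", return_tensors="tf")

# Teacher forcing: decoder input is the target shifted right, starting with the pad id.
decoder_start = tf.zeros_like(tgt[:, :1]) + tokenizer.pad_token_id
decoder_input_ids = tf.concat([decoder_start, tgt[:, :-1]], axis=1)

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-4)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

with tf.GradientTape() as tape:
    logits = model(src, decoder_input_ids=decoder_input_ids, training=True)[0]
    loss = loss_fn(tgt, logits)  # position i of the logits predicts target token i

grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```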
transformers
4,148
closed
DistilBertForQuestionAnswering returns [UNK]
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...):Distilbert Language I am using the model on (English, Chinese ...):English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Distilber for question answering is return a sentence of [UNK] Steps to reproduce the behavior: Code snippet I am using : ```python import torch from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad') model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad') torch_device = "cuda" if torch.cuda.is_available() else "cpu" def answer(question, text): input_dict = tokenizer.encode_plus( question, text, return_tensors='pt', max_length=512) input_ids = input_dict["input_ids"].to('cpu') start_scores, end_scores = model(input_ids) start = torch.argmax(start_scores) end = torch.argmax(end_scores) all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0]) answer = ''.join(all_tokens[start: end + 1]).replace('▁', ' ').strip() answer = answer.replace('[SEP]', '') return answer if answer != '[CLS]' and len(answer) != 0 else 'could not find an answer' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Using the Albert is giving a good results, but DistilBert tokenizer is not working. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.2.1 - Platform: Linux 18.04 - Python version: 3.6.9 - PyTorch version (GPU?): pytorch-pretrained-bert==0.6.2 - Tensorflow version (GPU?):NO - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
05-04-2020 23:22:37
05-04-2020 23:22:37
Solved : Had to update transformers & torch this code is working fine : ```python from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering import torch tokenizer = DistilBertTokenizer.from_pretrained( 'distilbert-base-uncased-distilled-squad') model = DistilBertForQuestionAnswering.from_pretrained( 'distilbert-base-uncased-distilled-squad') input_ids = torch.tensor(tokenizer.encode( question, corpus, max_length=256, add_special_tokens=True)).unsqueeze(0) # Batch size 1 start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions) loss, start_scores, end_scores = outputs[:3] all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0]) answer = tokenizer.convert_tokens_to_string( all_tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1]) print('*************') print(answer) ```
transformers
4,147
closed
Error in Calculating Sentence Perplexity for GPT-2 model
Hi, I am using the following code to calculate the perplexity of sentences with my pretrained GPT-2 model: ``` import math import torch from transformers import GPT2Tokenizer, GPT2Config, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('gpt-model') config = GPT2Config.from_pretrained('gpt-model') model = GPT2LMHeadModel.from_pretrained('gpt-model', config=config) model.eval() def calculatePerplexity(sentence, model, tokenizer): input_ids = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0) input_ids = input_ids.to('cpu') with torch.no_grad(): outputs = model(input_ids, labels=input_ids) loss, logits = outputs[:2] return math.exp(loss) ``` For some of the sentences from my test corpus, I am getting the following error: **Token indices sequence length is longer than the specified maximum sequence length for this model (1140 > 1024). Running this sequence through the model will result in indexing errors** How can I resolve this error? Kindly advise.
05-04-2020 18:23:57
05-04-2020 18:23:57
The longest input length a pretrained GPT-2 model can handle depends on its `n_positions` value. You can look it up here, e.g. https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json . If you use a pretrained model, you sadly can only treat sequences <= 1024. For your own model you can increase `n_positions` and retrain the longer position encoding matrix this way. If you are just interested in the perplexity you could also simply cut the input_ids into smaller chunks and average the loss over them. It will not be exactly the same, but a good approximation.<|||||>Is it being calculated in the same way for the evaluation of training on the validation set? Secondly, if we calculate the perplexity of all the individual sentences from corpus "xyz" and take the average perplexity of these sentences, will it be the same as calculating the perplexity of the whole corpus by using the parameter "eval_data_file" in the language model script?<|||||>1) Yes 2) No -> since you don't take into account the probability `p(first_token_sentence_2 | last_token_sentence_1)`, but it will be a very good approximation. Hope this answers your question
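A rough sketch of the chunk-and-average approximation suggested above (the window size and helper name are only illustrative):

```python
import math
import torch

def chunked_perplexity(sentence, model, tokenizer, max_len=1024):
    input_ids = tokenizer.encode(sentence)
    total_nll, total_tokens = 0.0, 0
    for i in range(0, len(input_ids), max_len):
        chunk = torch.tensor(input_ids[i:i + max_len]).unsqueeze(0)
        if chunk.size(1) < 2:
            continue  # need at least two tokens for a next-token loss
        with torch.no_grad():
            loss = model(chunk, labels=chunk)[0]  # mean NLL over this chunk
        n = chunk.size(1) - 1                     # number of predicted tokens
        total_nll += loss.item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)
```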
transformers
4,146
closed
Tpu trainer
This PR aims to bring TPU support to the trainer. It runs on GLUE/MRPC but is yet untested on others. Left to do: - [ ] Saving and reloading mid-training - [ ] Check it runs on a few examples (Language modeling, NER, other GLUE tasks) - [ ] Write down training speed-ups - [ ] Write down evaluation speed-ups The API is not final.
05-04-2020 15:35:19
05-04-2020 15:35:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=h1) Report > Merging [#4146](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d713cfc5ebfb1ed83de1fce55dd7279f9db30672&el=desc) will **decrease** coverage by `0.08%`. > The diff coverage is `33.33%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4146/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4146 +/- ## ========================================== - Coverage 78.84% 78.76% -0.09% ========================================== Files 114 114 Lines 18688 18729 +41 ========================================== + Hits 14735 14751 +16 - Misses 3953 3978 +25 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `42.27% <25.58%> (-1.64%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `84.21% <63.63%> (-3.49%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=footer). Last update [d713cfc...e882471](https://codecov.io/gh/huggingface/transformers/pull/4146?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome!
transformers
4,145
closed
tokenizer.batch_encode_plus does not return input_len
# 🚀 Feature request ``` batch_tokens = self.tokenizer.batch_encode_plus( [example.text_a for example in examples], max_length=max_seq_len, pad_to_max_length=True, ) for ex_id, (input_ids, segment_ids, input_mask) in enumerate(zip(batch_tokens["input_ids"], batch_tokens["token_type_ids"], batch_tokens["attention_mask"])): input_len = sum(1 for id in input_ids if id != self.tokenizer.pad_token_id) ``` It would be great if `batch_encode_plus` returned `input_len` as well, instead of having to recompute it from the pad tokens as above.
05-04-2020 08:35:10
05-04-2020 08:35:10
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
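For what it's worth, the unpadded length can already be recovered from the attention mask returned by `batch_encode_plus`; a small sketch (the texts are made up):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

texts = ["a short example", "a somewhat longer example sentence for padding"]
batch = tokenizer.batch_encode_plus(texts, max_length=16, pad_to_max_length=True)

# The attention mask is 1 for real tokens and 0 for padding,
# so its row sums are exactly the per-example input lengths.
input_lens = [sum(mask) for mask in batch["attention_mask"]]
print(input_lens)
```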
transformers
4,144
closed
Use finetuned-BART large to do conditional generation
Hi I am using a slightly old tag of ur repo where BART had run_bart_sum.py. I finetuned bart-large on a custom data set and want to do conditional generation ``` from transformers import BartTokenizer, BartForConditionalGeneration import torch model = BartForConditionalGeneration.from_pretrained('bart-large') tokenizer = BartTokenizer.from_pretrained('bart-large') ARTICLE_TO_SUMMARIZE = "President Donald Trump's senior adviser and son-in-law, Jared Kushner, praised the administration's response to the coronavirus pandemic as a \"great success story\" on Wednesday -- less than a day after the number of confirmed coronavirus cases in the United States topped 1 million. Kushner painted a rosy picture for \"Fox and Friends\" Wednesday morning, saying that \"the federal government rose to the challenge and this is a great success story and I think that that's really what needs to be told.\"" # model = BartForConditionalGeneration.from_pretrained('./bart_sum/checkpointepoch=2.ckpt') # tokenizer = BartTokenizer.from_pretrained('./bart_sum/checkpointepoch=2.ckpt') model = BartForConditionalGeneration.from_pretrained('bart-large') tokenizer = BartTokenizer.from_pretrained('bart-large') state = torch.load('./bart_sum/checkpointepoch=2.ckpt',map_location='cpu') model.load_state_dict(state) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) model.eval() inputs = tokenizer.batch_encode_plus( [ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') summary_ids = model.generate( inputs['input_ids'], num_beams=1, max_length=512, early_stopping=True) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) ``` I tried both loading the finetuned checkpoint directly as well as loading bart-large and setting state dict For former it gives me ``` Traceback (most recent call last): File "generate.py", line 10, in <module> model = BartForConditionalGeneration.from_pretrained('./bart_sum/checkpointepoch=2.ckpt') File "/datastor/Softwarez/miniconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 438, in from_pretrained **kwargs, File "/datastor/Softwarez/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 200, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/datastor/Softwarez/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 252, in get_config_dict config_dict = cls._dict_from_json_file(resolved_config_file) File "/datastor/Softwarez/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 344, in _dict_from_json_file text = reader.read() File "/datastor/Softwarez/miniconda3/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte ``` For latter ` Unexpected key(s) in state_dict: "epoch", "global_step", "checkpoint_callback_best", "optimizer_states", "lr_schedulers", "state_dict", "hparams", "hparams_type". `
05-04-2020 08:31:40
05-04-2020 08:31:40
If you used pytorch-lightning for training then you can load the weights from checkpoint as follows ``` ckpt = torch.load('./bart_sum/checkpointepoch=2.ckpt') model.load_state_dict(ckpt['state_dict']) ``` once you load the weights this way then save the model using the `.save_pretrained` method so that next time you can load it using `.from_pretrained`<|||||>@patil-suraj, I tried your suggestion on finetuned BART checkpoint; though this gives me the following error, P.S. Model and tokenizer used is "bart-large" `--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-10-d2e409f72e4e> in <module>() 1 ckpt = torch.load('./OUTPUT_DIR/checkpointcheckpoint_ckpt_epoch_2.ckpt') ----> 2 model.load_state_dict(ckpt['state_dict']) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict) 845 if len(error_msgs) > 0: 846 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( --> 847 self.__class__.__name__, "\n\t".join(error_msgs))) 848 return _IncompatibleKeys(missing_keys, unexpected_keys) 849 RuntimeError: Error(s) in loading state_dict for BartForConditionalGeneration: Missing key(s) in state_dict: "final_logits_bias", "model.shared.weight", "model.encoder.embed_tokens.weight", "model.encoder.embed_positions.weight", "model.encoder.layers.0.self_attn.k_proj.weight", "model.encoder.layers.0.self_attn.k_proj.bias", "model.encoder.layers.0.self_attn.v_proj.weight", "model.encoder.layers.0.self_attn.v_proj.bias", "model.encoder.layers.0.self_attn.q_proj.weight", "model.encoder.layers.0.self_attn.q_proj.bias", "model.encoder.layers.0.self_attn.out_proj.weight", "model.encoder.layers.0.self_attn.out_proj.bias", "model.encoder.layers.0.self_attn_layer_norm.weight", "model.encoder.layers.0.self_attn_layer_norm.bias", "model.encoder.layers.0.fc1.weight", "model.encoder.layers.0.fc1.bias", "model.encoder.layers.0.fc2.weight", "model.encoder.layers.0.fc2.bias", "model.encoder.layers.0.final_layer_norm.weight", "model.encoder.layers.0.final_layer_norm.bias", "model.encoder.layers.1.self_attn.k_proj.weight", "model.encoder.layers.1.self_attn.k_proj.bias", "model.encoder.layers.1.self_attn.v_proj.weight", "model.encoder.layers.1.self_attn.v_proj.bias", "model.encoder.layers.1.self_attn.q_proj.weight", "model.encoder.layers.1.self_attn.q_proj.bias", "model.encoder.layers.1.self_attn.out_proj.weight", "model.encoder.layers.1.self_attn.out_proj.bias", "model.encoder.layers.1.self_attn_layer_norm.weight", "model.encoder.layers.1.self_attn_layer_norm.bias", "model.encoder.layers.1.fc1.weight", "model.encoder.layers.1.fc1.bias", "model.encoder.layers.1.fc2... 
Unexpected key(s) in state_dict: "model.final_logits_bias", "model.model.shared.weight", "model.model.encoder.embed_tokens.weight", "model.model.encoder.embed_positions.weight", "model.model.encoder.layers.0.self_attn.k_proj.weight", "model.model.encoder.layers.0.self_attn.k_proj.bias", "model.model.encoder.layers.0.self_attn.v_proj.weight", "model.model.encoder.layers.0.self_attn.v_proj.bias", "model.model.encoder.layers.0.self_attn.q_proj.weight", "model.model.encoder.layers.0.self_attn.q_proj.bias", "model.model.encoder.layers.0.self_attn.out_proj.weight", "model.model.encoder.layers.0.self_attn.out_proj.bias", "model.model.encoder.layers.0.self_attn_layer_norm.weight", "model.model.encoder.layers.0.self_attn_layer_norm.bias", "model.model.encoder.layers.0.fc1.weight", "model.model.encoder.layers.0.fc1.bias", "model.model.encoder.layers.0.fc2.weight", "model.model.encoder.layers.0.fc2.bias", "model.model.encoder.layers.0.final_layer_norm.weight", "model.model.encoder.layers.0.final_layer_norm.bias", "model.model.encoder.layers.1.self_attn.k_proj.weight", "model.model.encoder.layers.1.self_attn.k_proj.bias", "model.model.encoder.layers.1.self_attn.v_proj.weight", "model.model.encoder.layers.1.self_attn.v_proj.bias", "model.model.encoder.layers.1.self_attn.q_proj.weight", "model.model.encoder.layers.1.self_attn.q_proj.bias", "model.model.encoder.layers.1.self_attn.out_proj.weight", "model.model.encoder.layers.1.self_attn.out_proj.bias", "model.model.encoder.layers.1.self... ` Please let me know how to tackle this?<|||||>@pranavpawar3 here `model` should be an instance of the `LighteningModule`. Initialize the `LighteningModule`, then you'll be able to do it this way ``` ckpt = torch.load('./OUTPUT_DIR/checkpointcheckpoint_ckpt_epoch_2.ckpt') model.load_state_dict(ckpt['state_dict']) # save the inner pretrained model model.model.save_pretrained('model_dir') # then you can load it using BartForConditionalGeneration BartForConditionalGeneration.from_pretrained('model_dir') ```<|||||>@patil-suraj Initiating model as LighteningModule instance worked, Thanks!!<|||||>@pranavpawar3 can I ask you to share how you initialized the LightningModule instance to make it compatible with the model you fine-tuned based on the pretrained bart-large model? I'm having the same issue. thanks!<|||||>@patil-suraj could you please show how to Initialize model as LighteningModule instance. Have the same problem with loading finetuned bart ckpt. Thanks in advance!<|||||>@sshleifer thanks for the link, meanwhile i managed to do what i wanted. anyway will be glad to see further improvements for summarisation tasks. for those who finetuned BART model with finetune_bart.sh and wants to load it in pytorch, the next thing worked for me. ``` class BartModel(pl.LightningModule): def __init__(self): super().__init__() self.model = BartForConditionalGeneration.from_pretrained('facebook/bart-large') def forward(self): pass ckpt = torch.load('./bart_sum/checkpointepoch=1.ckpt') bart_model = BartModel() bart_model.load_state_dict(ckpt['state_dict']) bart_model.model.save_pretrained("working_dir") ``` <|||||>Just merged a bunch of changes to the summarization finetuning code. Long description [here] https://github.com/huggingface/transformers/pull/4951. Would love it if somebody could take the new README/code for a spin! Some improvements (sorry to repeat myself): - you can finetune bart a lot faster with `--freeze_encoder` and `--freeze_embeds`. 
- you can collaborate with the community on hyperparams/modifications for the XSUM task using `--logger wandb_shared` - upgrade to pytorch_lightning==0.7.6 - You get a huggingface style checkpoint associated with the `.ckpt` checkpoint using the new rouge2 based model checkpoint. - Rouge (the canonical summarization metric) is calculated at every val step, this is slow. So you can use `--val_check_interval 0.1 --n_val 500` to compute rouge more frequently on a subset of the validation set. It's probably not perfect at the moment, so I'd love to know if anything is confusing or broken, either here or in a new issue :) Thanks! <|||||>@sshleifer hi, i checked changes. It works well, thanks for automatic model saving to pytorch format. Also good to see tips in readme to use bart-large-xsum for short summaries, i tried it with my dataset instead of bart-large and this improved my score! Are there any tips for t5? What type of t5 model is better for short summaries?<|||||>> @sshleifer thanks for the link, meanwhile i managed to do what i wanted. anyway will be glad to see further improvements for summarisation tasks. > > for those who finetuned BART model with finetune_bart.sh and wants to load it in pytorch, the next thing worked for me. > > ``` > class BartModel(pl.LightningModule): > def __init__(self): > super().__init__() > self.model = BartForConditionalGeneration.from_pretrained('facebook/bart-large') > > def forward(self): > pass > > ckpt = torch.load('./bart_sum/checkpointepoch=1.ckpt') > > bart_model = BartModel() > bart_model.load_state_dict(ckpt['state_dict']) > bart_model.model.save_pretrained("working_dir") > ``` Thx :D. Answer was helpful
transformers
4,143
closed
Camembert-large-fquad model card
Description for the model card describing the camembert-large-fquad model.
05-04-2020 08:17:17
05-04-2020 08:17:17
transformers
4,142
closed
New model addition: Blender (Facebook chatbot)
# 🌟 New model addition ## Model description - Dialogue response generation model for multiturn conversations, open-sourced by Facebook (28 April 2020) - 3 models: 90m, 2.7b and 9.4b parameters - New state-of-the-art "in terms of engagingness and humanness measurements" - Blog post: https://parl.ai/projects/blender/ - Paper: https://arxiv.org/abs/2004.13637 - Code: Included in Facebook dialogue framework ParlAI https://github.com/facebookresearch/ParlAI ## Open source status * [ ] the model implementation is available: - possibly yes, together with the weights * [x] the model weights are available: - Need to install ParlAI (https://github.com/facebookresearch/ParlAI). Model weights are then downloaded when running `python parlai/scripts/safe_interactive.py -t blended_skill_talk -mf zoo:blender/blender_3B/model`. Compare https://github.com/facebookresearch/ParlAI/tree/master/parlai/zoo/blender * [x] who are the authors: (mention them, if possible by @gh-username) - Facebook AI Research: - Stephen Roller, Emily Dinan, Jason Weston - and Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott,Kurt Shuster, Eric M. Smith, Y-Lan Boureau,
05-04-2020 07:54:13
05-04-2020 07:54:13
@tx1985 awesome idea<|||||>Very Cool !<|||||>+1<|||||>Hey, I want to ask a question regarding Facebook Blender Chatbot. I want to train my custom data into blender. how can i do that? Please help<|||||>Sorry, I know how to use blender as is but not how to fine tune it. I think that you would require a significant amount of compute to do that !<|||||>Very cool!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
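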
transformers
4,141
closed
🚀 An advice about changing variable name from "attention_mask" to "adder"
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation I noticed that some users are pretty confused when reading source codes about variable `attention_mask` like: [What is the meaning of Attention Mask #205](https://github.com/huggingface/transformers/issues/205) [Clarifying attention mask #542](https://github.com/huggingface/transformers/issues/542) And I refer to the origional BERT repository - [google-research/bert](https://github.com/google-research/bert). Compared to the origin, I find in this repo sometimes the concepts of `attention_mask` and `adder` are mixed. refering original BERT: [./modeling.py#L707](https://github.com/google-research/bert/blob/master/modeling.py#L707) ```python attention_mask = tf.expand_dims(attention_mask, axis=[1]) adder = (1.0 - tf.cast(attention_mask, tf.float32)) * -10000.0 attention_scores += adder ``` But in this repo: take [src/transformers/modeling_tf_openai.py#L282](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_openai.py#L282) as an example: ```python attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :] attention_mask = tf.cast(attention_mask, tf.float32) attention_mask = (1.0 - attention_mask) * -10000.0 ``` and inside the method `TFAttention._attn()` [src/transformers/modeling_tf_openai.py#L112](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_openai.py#L112): ```python if attention_mask is not None: # Apply the attention mask w = w + attention_mask ``` <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution may be changing its name is way better? like: ```python attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :] attention_mask = tf.cast(attention_mask, tf.float32) adder = (1.0 - attention_mask) * -10000.0 ``` and then: ```python if adder is not None: # Apply the attention mask attention_score = w + adder ``` <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
05-04-2020 07:40:39
05-04-2020 07:40:39
I agree! Do you want to open a PR about this to change the naming? :-) We just have to be careful not to change the user-facing API when doing this -> which means that ideally, we should not rename any function arguments of high-level modules like `BertModel.forward()`.<|||||>I've created PR #4566 for this issue<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,140
closed
How to make run_language_modeling.py work for transformer-xl?
The code in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) does language modeling for any pre-trained model. However, transformer-xl needs additional arguments to the forward function, such as `mems`, and [run_transfo_xl.py](https://github.com/huggingface/transformers/blob/master/examples/contrib/run_transfo_xl.py) even performs a `model.reset_length()` call using `mem_len` and `tgt_len` as arguments. I am not sure how these steps are handled in this generic LM script for any pre-trained model. Any suggestions would be helpful.
05-04-2020 06:37:49
05-04-2020 06:37:49
Yeah, we don't have a version of the run_language_modeling.py script that handles transformer-xl (threading `mems`, calling `reset_length`) yet. It's on the TODO list :-). Also see #2008. If you feel like writing one, it would be great :-)
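Until such a script exists, here is a rough sketch of how `mems` could be threaded through a manual training loop for Transformer-XL. The batch iterable, the `reset_length` values, and the position of `mems` in the returned tuple are assumptions; check them against the docstring of your transformers version.

```python
import torch
from transformers import TransfoXLLMHeadModel

model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
model.train()
model.reset_length(128, 0, 1600)  # tgt_len, ext_len, mem_len (illustrative values)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

mems = None
for batch in train_batches:  # assumed: an iterable of LongTensors of shape (batch, tgt_len)
    outputs = model(batch, mems=mems, labels=batch)
    loss = outputs[0].mean()  # token-level losses are returned when labels are given
    mems = outputs[-1]        # new memories; verify the exact index for your version
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```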
transformers
4,139
closed
Cannot set max_position_embeddings to any desired value in T5Config
# 🐛 Bug ## Information The model I am using (T5): Language I am using the model on (English): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ``` config = T5Config(max_position_embeddings=1024) ``` ## To reproduce Steps to reproduce the behaviour: 1.Initiate T5Config 2. set max_position_embeddings any desired value like 1024, 2048 (as mentioned in docs) This works perfectly fine with BertConfig This is an error: <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` Can't set max_position_embeddings with value 1024 for T5Config { "_num_labels": 2, "architectures": null, "bad_words_ids": null, "bos_token_id": null, "decoder_start_token_id": null, "do_sample": false, "early_stopping": false, "eos_token_id": 1, "finetuning_task": null, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "is_decoder": false, "is_encoder_decoder": true, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "length_penalty": 1.0, "max_length": 20, "min_length": 0, "model_type": "t5", "no_repeat_ngram_size": 0, "num_beams": 1, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pad_token_id": 0, "prefix": null, "pruned_heads": {}, "repetition_penalty": 1.0, "task_specific_params": null, "temperature": 1.0, "top_k": 50, "top_p": 1.0, "torchscript": false, "use_bfloat16": false } --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-44823d82a38e> in <module>() ----> 1 config = T5Config(max_position_embeddings=1024) 2 frames /usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in __init__(self, **kwargs) 106 for key, value in kwargs.items(): 107 try: --> 108 setattr(self, key, value) 109 except AttributeError as err: 110 logger.error("Can't set {} with value {} for {}".format(key, value, self)) AttributeError: can't set attribute ``` After further investigating in docs here is what I found (If I am wrong, then please correct me): ``` self.vocab_size = vocab_size self.n_positions = n_positions self.d_model = d_model self.d_kv = d_kv self.d_ff = d_ff self.num_layers = num_layers self.num_heads = num_heads self.relative_attention_num_buckets = relative_attention_num_buckets self.dropout_rate = dropout_rate self.layer_norm_epsilon = layer_norm_epsilon self.initializer_factor = initializer_factor @property def max_position_embeddings(self): return self.n_positions ``` Looking at code snippets I think we have to directly use n_positions instead of max_position_embeddings. I have tried this approach & **it worked**. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0+cu101 (True) - Tensorflow version (GPU?): 2.2.0-rc3 (True) - Using GPU in script?: True
05-04-2020 06:33:51
05-04-2020 06:33:51
IMO the docstring is confusing here. This PR should fix it: #4422. And you are right: you should change `max_position_embeddings` via `n_positions`.
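For reference, a minimal snippet of that workaround:

```python
from transformers import T5Config

# n_positions is the real attribute; max_position_embeddings is a read-only alias for it
config = T5Config(n_positions=1024)
print(config.max_position_embeddings)  # 1024
```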
transformers
4,138
closed
[Roberta] fix hard wired pad token id
This might break backward compatibility for users that have uploaded a roberta model and use a different pad token than 1 in the configs.
05-04-2020 00:31:23
05-04-2020 00:31:23
I checked all models on `https://huggingface.co/models?search=roberta` and all of them have the pad_token_id set to `1` which was used as a default before. So this is good to merge for me.
transformers
4,137
closed
Error: `cannot import name 'TFBertForMaskedLM'`
When I try the following import: ```python from transformers import TFBertForMaskedLM ``` I get the following error: ``` Traceback (most recent call last): File "/Users/danielk/ideaProjects/farsi-language-models/src/try_bert_lm.py", line 3, in <module> from transformers import TFBertForMaskedLM ImportError: cannot import name 'TFBertForMaskedLM' from 'transformers' (/usr/local/lib/python3.7/site-packages/transformers/__init__.py) ``` My versions are: ``` torch 1.4.0 transformers 2.8.0 ```
05-03-2020 20:50:04
05-03-2020 20:50:04
The issue was that I did not have TensorFlow installed.
transformers
4,136
closed
Cannot loading SpanBERT pre-trained model
I have some questions regarding SpanBert loading using the transformers package. I downloaded the pre-trained file from [SpanBert](https://github.com/facebookresearch/SpanBERT) GitHub Repo and ```vocab.txt``` from Bert. Here is the code I used for loading: ```python model = BertModel.from_pretrained(config_file=config_file, pretrained_model_name_or_path=model_file, vocab_file=vocab_file) model.to("cuda") ``` where - ```config_file``` -> ```config.json``` - ```model_file``` -> ```pytorch_model.bin``` - ```vocab_file``` -> ```vocab.txt``` But I got the ```UnicodeDecoderError``` with the above code saying that ```'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte``` I also tried loading SpanBert with the method mentioned [here](https://huggingface.co/SpanBERT/spanbert-large-cased). But it returned ```OSError: file SpanBERT/spanbert-base-cased not found```. Do you have any suggestions on loading the pre-trained model correctly? Any suggestions are much appreciated. Thanks!
05-03-2020 20:17:21
05-03-2020 20:17:21
@palooney Can you check again ?. The following call worked for me ``` model = AutoModel.from_pretrained("SpanBERT/spanbert-base-cased") ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Closing, feel free to reopen if it isn't fixed.
transformers
4,135
closed
Why does ALBERT use einsum in PyTorch implementation while in TF one it does not?
# ❓ Questions & Help I wanted to learn internals of ALBERT model from your implementation (which is BTW really clean in comparison to the original one - good job!), but I've stumbled upon weird looking part in the `AlbertAttention`: https://github.com/huggingface/transformers/blob/6af3306a1da0322f58861b1fbb62ce5223d97b8a/src/transformers/modeling_albert.py#L258 Why does PyTorch version use `einsum`-based notation while calculating hidden state (with manual usage of `dense` layer's weights), while the TensorFlow version just reshapes the `context_layer` and does standard "forward" on dense layer? https://github.com/huggingface/transformers/blob/6af3306a1da0322f58861b1fbb62ce5223d97b8a/src/transformers/modeling_tf_albert.py#L296 I would really like to know the explanation of this implementation - @LysandreJik cloud you shed some light here?
05-03-2020 19:26:15
05-03-2020 19:26:15
The two implementations are equivalent, but the Pytorch version is cumbersome. I think the code ```python context_layer = context_layer.permute(0, 2, 1, 3).contiguous() # Should find a better way to do this w = ( self.dense.weight.t() .view(self.num_attention_heads, self.attention_head_size, self.hidden_size) .to(context_layer.dtype) ) b = self.dense.bias.to(context_layer.dtype) projected_context_layer = torch.einsum("bfnd,ndh->bfh", context_layer, w) + b ``` should be rewritten by ```python context_layer = context_layer.permute(0, 2, 1, 3).contiguous() new_shape = context_layer.size()[:-2] + (-1,) context_layer = context_layer.view(*new_shape) projected_context_layer = self.dense(context_layer) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Ping @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Bump<|||||>Ping @LysandreJik again<|||||>For no particular reason, but it might not have been the best choice according to [this thread on performance](https://github.com/huggingface/transformers/issues/6771).<|||||>OK, thanks.
transformers
4,134
closed
jplu/tf-xlm-roberta-large showing random performance
# 🐛 jplu/tf-xlm-roberta-large showing random performance on TPU model training ## I have observed several times that jplu/tf-xlm-roberta-large shows random performance across experiments. For example, if I train a model and get 61% validation accuracy, then run the same code again (with 0% change to the code), the second time I get over 96% accuracy. Sometimes the validation accuracy stays the same for all epochs, and then the same code gives me totally different training statistics on the next run. Why is this happening? Model I am using: jplu/tf-xlm-roberta-large Language I am using the model on (English, Chinese, Turkish, Spanish, etc.): The problem arises when using: jplu/tf-xlm-roberta-large for finetuning The task I am working on is: the ongoing Kaggle competition "Jigsaw Multilingual Toxic Comment Classification", where the train set contains all English comments with 0/1 (toxic/non-toxic) labels, and the validation + test data contain non-English comments. ## To reproduce Steps to reproduce the behavior: 1. Simply fork and run this kernel: https://www.kaggle.com/mobassir/understanding-cross-lingual-models and you will never get the same result, and will probably get a somewhat worse result. ## Expected behavior Training the same model several times should always produce close results, if not exactly the same. The randomness is high: sometimes a model with 91% validation accuracy gets 61% validation accuracy on the next run, and sometimes the validation accuracy of that same model gets frozen and it seems like the model is not learning anything (I repeat the model name: it's tf-xlm-roberta-large). ## Environment info Kaggle kernels/notebooks using TPU
05-03-2020 18:43:57
05-03-2020 18:43:57
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>same here, mine is torch version, though. @mobassir94 Did you fix it?<|||||>@HenryPaik1 no i also faced this issue in both pytorch xla and in tf tpu
transformers
4,133
closed
problem about change from pytorch-pretrained-bert to transformers
I am doing sentiment analysis with pytorch-pretrained-bert and it works correctly, but when I change to transformers, the accuracy on the dev data is poor (while the accuracy with pytorch-pretrained-bert is much higher). Besides, I have read the "Migrating from pytorch-pretrained-bert to transformers" section. The input is: [CLS] sentence [SEP] aspect [SEP], and I just use BertForSequenceClassification (I also tried adding a classifier after BertModel), but everything fails on transformers. Can somebody figure out what's wrong with it? Thanks!
05-03-2020 15:50:00
05-03-2020 15:50:00
OK, the problem is that in transformers the positional order of the forward() method arguments is a little different compared to pytorch-pretrained-bert: the order changed from (token, segment, mask) to (token, mask, segment). So passing the args by name will always be right, lol
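A minimal illustration of the keyword-argument call that avoids the positional-order pitfall (the tensor variable names here are placeholders):

```python
outputs = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    token_type_ids=token_type_ids,
)
```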
transformers
4,132
closed
Create README.md
05-03-2020 12:18:59
05-03-2020 12:18:59
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=h1) Report > Merging [#4132](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6af3306a1da0322f58861b1fbb62ce5223d97b8a&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4132/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4132 +/- ## ======================================= Coverage 78.84% 78.84% ======================================= Files 114 114 Lines 18689 18689 ======================================= + Hits 14735 14736 +1 + Misses 3954 3953 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.55% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=footer). Last update [6af3306...d6aedbe](https://codecov.io/gh/huggingface/transformers/pull/4132?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks, @savasy! Can you add code fences around the code blocks?
transformers
4,131
closed
Make transformers-cli cross-platform
Using `scripts` is a useful option in `setup.py`, particularly when you want access to non-Python scripts. However, in this case we want an entry point into some of our own Python scripts. To do this in a concise, cross-platform way, we can use `entry_points.console_scripts`. This change is necessary to provide the CLI on different platforms (notably including Windows), which "scripts" does not ensure. Usage remains the same, but the "transformers-cli" script has to be moved (become part of the library) and renamed (underscore + extension). So even though the script itself has been renamed to `transformers_cli.py` and moved to `src/commands`, its usage is still the same: `transformers-cli <args>`. Tested on Ubuntu and Windows.
05-03-2020 10:06:05
05-03-2020 10:06:05
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=h1) Report > Merging [#4131](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1cdd2ad2afb73f6af185aafecb7dd7941a90c4d1&el=desc) will **decrease** coverage by `0.11%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4131/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4131 +/- ## ========================================== - Coverage 78.85% 78.74% -0.12% ========================================== Files 114 115 +1 Lines 18688 18712 +24 ========================================== - Hits 14737 14734 -3 - Misses 3951 3978 +27 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/commands/transformers\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/4131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.92% <0.00%> (-0.13%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=footer). Last update [1cdd2ad...3f8c3f5](https://codecov.io/gh/huggingface/transformers/pull/4131?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thoughts @aaugustin?<|||||>Yes, this is the best way to provide a cross-platform command in a Python library. See https://github.com/django/django/pull/2116 for another project making a similar change.<|||||>Alright, good for you @LysandreJik @thomwolf?
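For readers unfamiliar with the mechanism, a `setup.py` excerpt using `entry_points.console_scripts` looks roughly like this; the module path and entry function name shown here are assumptions for illustration, not necessarily the exact ones used in the PR:

```python
from setuptools import setup

setup(
    name="transformers",
    # ...
    entry_points={
        "console_scripts": [
            "transformers-cli=transformers.commands.transformers_cli:main",
        ]
    },
)
```

With this, pip generates a `transformers-cli` executable (a `.exe` shim on Windows), which is what makes the command cross-platform.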
transformers
4,130
closed
GPT-2 past behaves incorrectly when attention_mask is used
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT-2 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python from transformers import GPT2LMHeadModel import torch model = GPT2LMHeadModel.from_pretrained('gpt2') model.eval() # w/ `attention_mask` w/ `past` input_ids = torch.tensor([[0, 0, 1, 2]]) position_ids = torch.tensor([[0, 0, 1, 2]]) attention_mask = torch.tensor([[0, 1, 1, 1]]) output = model(input_ids=input_ids, position_ids=position_ids, attention_mask=attention_mask) input_ids = torch.tensor([[3]]) position_ids = torch.tensor([[3]]) attention_mask = torch.tensor([[1]]) output = model(input_ids=input_ids, position_ids=position_ids, attention_mask=attention_mask, past=output[1]) print(output[0][0, -1, :].detach().numpy()) # w/ `attention_mask` w/o `past` input_ids = torch.tensor([[0, 0, 1, 2, 3]]) position_ids = torch.tensor([[0, 0, 1, 2, 3]]) attention_mask = torch.tensor([[0, 1, 1, 1, 1]]) output = model(input_ids=input_ids, position_ids=position_ids, attention_mask=attention_mask) print(output[0][0, -1, :].detach().numpy()) # w/o `attention_mask` w/ `past` input_ids = torch.tensor([[0, 1, 2]]) position_ids = torch.tensor([[0, 1, 2]]) output = model(input_ids=input_ids, position_ids=position_ids) input_ids = torch.tensor([[3]]) position_ids = torch.tensor([[3]]) output = model(input_ids=input_ids, position_ids=position_ids, past=output[1]) print(output[0][0, -1, :].detach().numpy()) # w/o `attention_mask` w/o `past` input_ids = torch.tensor([[0, 1, 2, 3]]) position_ids = torch.tensor([[0, 1, 2, 3]]) output = model(input_ids=input_ids, position_ids=position_ids) print(output[0][0, -1, :].detach().numpy()) ``` ## Expected behavior All printed outputs should be equal. But this is what I see: ``` [-63.085373 -62.34121 -61.08647 ... -76.70221 -73.235085 -65.88774 ] [-67.13892 -66.132126 -64.81706 ... -80.93089 -77.780525 -70.23946 ] [-67.1389 -66.13211 -64.81704 ... -80.930855 -77.780495 -70.239426] [-67.13892 -66.132126 -64.81706 ... -80.93089 -77.780525 -70.23946 ] ``` Specifically, it seems like `past` does not work correctly when used with `attention_mask`. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Linux-5.4.19-100.fc30.x86_64-x86_64-with-fedora-30-Thirty - Python version: 3.7.6 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
05-03-2020 07:36:42
05-03-2020 07:36:42
Solved. `attention_mask` should be the full tensor (not just the last step), even when `past` is used.
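Concretely, reusing the names from the snippet in the issue, only the second call changes; the attention mask covers the cached positions plus the new token:

```python
input_ids = torch.tensor([[3]])
position_ids = torch.tensor([[3]])
attention_mask = torch.tensor([[0, 1, 1, 1, 1]])  # full length: past tokens + new token
output = model(
    input_ids=input_ids,
    position_ids=position_ids,
    attention_mask=attention_mask,
    past=output[1],
)
```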
transformers
4,129
closed
Parse in tensorflow strings as well as normal strings
# 🚀 Feature request So this is something I came across while trying to get a (fast) streaming dataset via the tensorflow data API for TPUs. This is a crosspost of [this SO question](https://stackoverflow.com/questions/61555097/mapping-text-data-through-huggingface-tokenizer/). I have my encode function that looks like this: ```python from transformers import BertTokenizer, BertModel MODEL = 'bert-base-multilingual-uncased' tokenizer = BertTokenizer.from_pretrained(MODEL) def encode(texts, tokenizer=tokenizer, maxlen=10): # import pdb; pdb.set_trace() inputs = tokenizer.encode_plus( texts, return_tensors='tf', return_attention_masks=True, return_token_type_ids=True, pad_to_max_length=True, max_length=maxlen ) return inputs['input_ids'], inputs["token_type_ids"], inputs["attention_mask"] ``` I want to get my data **encoded on the fly** by doing this: ```python x_train = (tf.data.Dataset.from_tensor_slices(df_train.comment_text.astype(str).values) .map(encode)) ``` However, this chucks the error: ``` ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. ``` Now from my understanding when I set a breakpoint inside `encode` it was because I was sending a non-numpy array. How do I get huggingface transformers to play nice with tensorflow strings as inputs? If you need a dummy dataframe here it is: ``` df_train = pd.DataFrame({'comment_text': ['Today was a good day']*5}) ``` ## What I tried So I tried to use `from_generator` so that I can parse in the strings to the `encode_plus` function. However, this does not work with TPUs. ```python AUTO = tf.data.experimental.AUTOTUNE def get_gen(df): def gen(): for i in range(len(df)): yield encode(df.loc[i, 'comment_text']) , df.loc[i, 'toxic'] return gen shapes = ((tf.TensorShape([maxlen]), tf.TensorShape([maxlen]), tf.TensorShape([maxlen])), tf.TensorShape([])) train_dataset = tf.data.Dataset.from_generator( get_gen(df_train), ((tf.int32, tf.int32, tf.int32), tf.int32), shapes ) train_dataset = train_dataset.batch(BATCH_SIZE).prefetch(AUTO) ``` My final solution was to encode the all the strings using `batch_encode_plus` but to write those tensors out to disk. This currently takes ~1 hour, but seems like an inefficient way of prepping data for training. ## Version Info: `transformers.__version__, tf.__version__` => `('2.7.0', '2.1.0')`
05-03-2020 05:11:36
05-03-2020 05:11:36
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hey, any updates about this issue? I'm facing the exact same problem!<|||||>@sachinruk @joaopedromattos any luck here in the end :)
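One hedged sketch of the "tokenize up front" route described in the question; the `toxic` label column comes from the competition data, and the exact keys returned by `batch_encode_plus` should be checked for your tokenizer version:

```python
import tensorflow as tf

encodings = tokenizer.batch_encode_plus(
    df_train.comment_text.astype(str).tolist(),
    max_length=10,
    pad_to_max_length=True,
)
x_train = (
    tf.data.Dataset.from_tensor_slices((dict(encodings), df_train.toxic.values))
    .batch(32)
    .prefetch(tf.data.experimental.AUTOTUNE)
)
```

This avoids running Python tokenization inside the `tf.data` graph, which is also why it remains usable on TPU.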
transformers
4,128
closed
Add decoder specific error message for T5Stack.forward
#### This PR adds a decoder specific error message in `T5Stack.forward` in the case that `self.is_decoder == True` This was requested in #4126 and suggested by @patrickvonplaten in #3626 . Confirmed that this works as expected in this colab [notebook](https://colab.research.google.com/drive/1j1mdtOylXZClH-ikZthDiOxXg4DOis_4?usp=sharing). Missing `decoder_input_ids` example: ``` ## Setup t5-small and tokenized inputs from transformers import T5Tokenizer, T5Model tokenizer = T5Tokenizer.from_pretrained('t5-small') model = T5Model.from_pretrained('t5-small') input_ids = tokenizer.encode("Hello, my dog is cute", return_tensors="pt") # Batch size 1 # Test missing `decoder_input_ids` model(input_ids=input_ids)[0] # ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds ``` Missing `input_ids` example: ``` model(decoder_input_ids=input_ids)[0] # ValueError: You have to specify either input_ids or inputs_embeds ```
05-03-2020 03:34:17
05-03-2020 03:34:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=h1) Report > Merging [#4128](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1cdd2ad2afb73f6af185aafecb7dd7941a90c4d1&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4128/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4128 +/- ## ========================================== - Coverage 78.85% 78.84% -0.01% ========================================== Files 114 114 Lines 18688 18689 +1 ========================================== - Hits 14737 14736 -1 - Misses 3951 3953 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.66% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.92% <0.00%> (-0.13%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=footer). Last update [1cdd2ad...a301d78](https://codecov.io/gh/huggingface/transformers/pull/4128?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,127
closed
Model output should be dict
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> I think model output should be a dict. A dict is clearer than a tuple: we do not know what output[1] means, but we do know what output["pooled_output"] means. https://github.com/huggingface/transformers#models-always-output-tuples ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
05-03-2020 03:31:37
05-03-2020 03:31:37
We will soon convert the outputs of models to named tuples so that they will be more readable :-)
transformers
4,126
closed
No decoder specific error message for T5Stack.forward
When I run a forward pass with `T5Model` with a missing value for `decoder_input_ids`, it still gives me the same error message as when `input_ids` is missing (below). `ValueError: You have to specify either input_ids or inputs_embeds` I've confirmed with @patrickvonplaten in issue #3626 that it makes sense to have a specific error message for the case when `self.is_decoder == True`.
05-03-2020 03:21:43
05-03-2020 03:21:43
transformers
4,125
closed
Is it possible to have high perplexity of some individual sentences, as compared to overall testing corpus perplexity?
Hi, I have calculated the overall perplexity of my test corpus after finetuning a pretrained GPT-2 model. Similarly, I have also calculated the perplexity of individual sentences extracted from that test corpus. **I notice that some individual sentence perplexities are even higher than the overall perplexity.** What could be the reason for this? Is it possible? Have I done something wrong in my calculation? Kindly share your opinion.
05-02-2020 23:20:43
05-02-2020 23:20:43
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
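For what it's worth, this is expected: corpus perplexity is an exponentiated average negative log-likelihood over all tokens, so individual sentences can sit above or below it. A toy illustration with made-up per-token losses:

```python
import math

sent_a = [1.0, 1.0, 1.0]  # an "easy" sentence: low per-token loss
sent_b = [3.0, 3.0, 3.0]  # a "hard" sentence: high per-token loss

ppl_a = math.exp(sum(sent_a) / len(sent_a))        # ~2.72
ppl_b = math.exp(sum(sent_b) / len(sent_b))        # ~20.09

corpus = sent_a + sent_b
ppl_corpus = math.exp(sum(corpus) / len(corpus))   # ~7.39, between the two
```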
transformers
4,124
closed
convert pytorch_model.pt [pretrained Bert model] to pytorch_model.onnx (ONNX)
I use this to convert pytorch_model.pt to pytorch_model.onnx: ``` from torch.autograd import Variable import torch.onnx import torchvision import torch dummy_input = Variable(torch.randn(1, 3, 256, 256)) model = torch.load('./pytorch_model.pt') torch.onnx.export(model, dummy_input, "pytorch_model.onnx") ``` ERROR: `The expanded size of the tensor (256) must match the existing size (3) at non-singleton dimension 3. Target sizes: [1, 3, 256, 256]. Tensor sizes: [1, 3]` Is my `dummy_input` incorrect?
05-02-2020 19:30:18
05-02-2020 19:30:18
May be of interest to @mfuntowicz <|||||>Hi @pumpkinband, I don't have the exact details of the model you're trying to load, but I'd assume a tensor of shape (1, 3) would make more sense (batch, sequence). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
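A minimal export sketch with a correctly shaped dummy input; `bert-base-uncased` stands in for the actual checkpoint, and the opset version is just a common choice:

```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

dummy_input_ids = torch.ones(1, 8, dtype=torch.long)  # (batch, sequence) token ids, not an image tensor
torch.onnx.export(model, (dummy_input_ids,), "bert_model.onnx", opset_version=11)
```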
transformers
4,123
closed
Text corrections
05-02-2020 18:44:18
05-02-2020 18:44:18
@lapolonio - thanks a lot for these fixes. I already applied them in the main PR :-)
transformers
4,122
closed
ValueError: You are attempting to pad samples but the tokenizer you are using (GPT2Tokenizer) does not have one.
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...):GPT2 Language I am using the model on (English, Chinese ...):English The problem arises when using: my own modified scripts: (give details below) python examples/run_language_modeling.py \ --train_data_file=temp_gpt2/gpt2.train \ --output_dir=checkpoints/gpt2 \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --eval_data_file=temp_gpt2/test.txt \ --line_by_line \ --do_train \ --do_eval \ --evaluate_during_training \ --per_gpu_train_batch_size=20 \ --per_gpu_eval_batch_size=20 \ --gradient_accumulation_steps=1 \ --learning_rate=8e-5 \ --weight_decay=0.075 \ --adam_epsilon=1e-8 \ --warmup_steps=500 \ --max_grad_norm=5.0 \ --num_train_epochs=20 \ --logging_steps=500 \ --save_steps=500 The tasks I am working on is: * [ ] an official GLUE/SQUaD task: language modeling * [ ] my own task or dataset: (give details below) Conll2014 GEC ## To reproduce Steps to reproduce the behavior: 1. run the script I get the following error: ``` bash train_gpt2.sh 05/02/2020 10:14:25 - INFO - transformers.training_args - PyTorch: setting up devices 05/02/2020 10:14:25 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: False 05/02/2020 10:14:25 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='checkpoints/gpt2', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=True, per_gpu_train_batch_size=20, per_gpu_eval_batch_size=20, gradient_accumulation_steps=1, learning_rate=8e-05, weight_decay=0.075, adam_epsilon=1e-08, max_grad_norm=5.0, num_train_epochs=20.0, max_steps=-1, warmup_steps=500, logging_dir=None, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1) 05/02/2020 10:14:35 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /home/zixi/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.085d5f6a8e7812ea05ff0e6ed0645ab2e75d80387ad55c1ad9806ee70d272f80 05/02/2020 10:14:35 - INFO - transformers.configuration_utils - Model config GPT2Config { "activation_function": "gelu_new", "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "model_type": "gpt2", "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_layer": 12, "n_positions": 1024, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "vocab_size": 50257 } 05/02/2020 10:14:39 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /home/zixi/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.4c1d7fc2ac6ddabeaf0c8bec2ffc7dc112f668f5871a06efcff113d2797ec7d5 05/02/2020 10:14:39 - INFO - transformers.configuration_utils - Model config GPT2Config { "activation_function": "gelu_new", "architectures": [ "GPT2LMHeadModel" ], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "model_type": "gpt2", "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_layer": 12, "n_positions": 1024, "resid_pdrop": 0.1, "summary_activation": null, 
"summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "vocab_size": 50257 } 05/02/2020 10:14:42 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json from cache at /home/zixi/.cache/torch/transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71 05/02/2020 10:14:42 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt from cache at /home/zixi/.cache/torch/transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda 05/02/2020 10:14:43 - INFO - transformers.modeling_utils - loading weights file https://cdn.huggingface.co/gpt2-pytorch_model.bin from cache at /home/zixi/.cache/torch/transformers/d71fd633e58263bd5e91dd3bde9f658bafd81e11ece622be6a3c2e4d42d8fd89.778cf36f5c4e5d94c8cd9cefcf2a580c8643570eb327f0d4a1f007fab2acbdf1 05/02/2020 10:14:47 - INFO - transformers.modeling_utils - Weights of GPT2LMHeadModel not initialized from pretrained model: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight'] 05/02/2020 10:14:47 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at temp_gpt2/gpt2_train.txt 05/02/2020 10:16:41 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at temp_gpt2/gpt2_test.txt 05/02/2020 10:16:44 - INFO - transformers.trainer - ***** Running training ***** 05/02/2020 10:16:44 - INFO - transformers.trainer - Num examples = 1130686 05/02/2020 10:16:44 - INFO - transformers.trainer - Num Epochs = 20 05/02/2020 10:16:44 - INFO - transformers.trainer - Instantaneous batch size per GPU = 20 05/02/2020 10:16:44 - INFO - transformers.trainer - Total train batch size (w. 
parallel, distributed & accumulation) = 40 05/02/2020 10:16:44 - INFO - transformers.trainer - Gradient Accumulation steps = 1 05/02/2020 10:16:44 - INFO - transformers.trainer - Total optimization steps = 565360 Epoch: 0%| | 0/20 [00:00<?, ?it/sTraceback (most recent call last): | 0/28268 [00:00<?, ?it/s] File "examples/run_language_modeling.py", line 284, in <module> main() File "examples/run_language_modeling.py", line 254, in main trainer.train(model_path=model_path) File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/transformers/trainer.py", line 307, in train for step, inputs in enumerate(epoch_iterator): File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/tqdm/std.py", line 1107, in __iter__ for obj in iterable: File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/transformers/data/data_collator.py", line 91, in collate_batch batch = self._tensorize_batch(examples) File "/home/zixi/anaconda3/envs/ROC/lib/python3.7/site-packages/transformers/data/data_collator.py", line 106, in _tensorize_batch "You are attempting to pad samples but the tokenizer you are using" ValueError: You are attempting to pad samples but the tokenizer you are using (GPT2Tokenizer) does not have one. Epoch: 0%| | 0/20 [00:00<?, ?it/s] Iteration: 0%| ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:master - Platform:Ubuntu 18.04LTS - Python version:3.7 - PyTorch version (GPU?):1.5 - Tensorflow version (GPU?):-- - Using GPU in script?:yes - Using distributed or parallel set-up in script?:No
05-02-2020 17:32:10
05-02-2020 17:32:10
I had the same error; I think the problem lies in "line_by_line" since if I remove this option, my code can run fine.<|||||>@BramVanroy - sorry to just link you here. Did we decide to add a force-padding option here or not yet? <|||||>We had the discussion over here https://github.com/huggingface/transformers/pull/3388#discussion_r407697829 @GCHQResearcher92457 mentions that they "tried to implement your suggestion". Perhaps it's better if you @patrickvonplaten could implement review the changes in the updated PR here https://github.com/huggingface/transformers/pull/4009/commits/5ff6eb7073fc6959b4cd5464defddc101d3163f8 ?<|||||>Any progress about this issue? I have the same problem if I use line_by_line with gpt2. Thanks<|||||>> Any progress about this issue? I have the same problem if I use line_by_line with gpt2. Thanks Same problem with v2.9.1<|||||>Personally, I think the fix is just that you _can't_ use the line_by_line dataset with gpt2 (because it doesn't have a padding token) @patrickvonplaten @BramVanroy Should I just raise an error that tells user to remove the `--line_by_line` flag from their command?<|||||>@julien-c Perhaps you can have a look at PR https://github.com/huggingface/transformers/commit/5ff6eb7073fc6959b4cd5464defddc101d3163f8 there, they suggest to add a force_padding_token option so that if a model does not have a padding token by default, it is added to the vocabulary manually. I have no preference: I like the implementation in the PR but it might not be what you would want or expect. Raising an error is also fine for me.<|||||>@julien-c I am stuck with same error. If not line by line how else can I train the GPT2 model from scratch? Here is my GPT2 config and language Model: from transformers import GPT2LMHeadModel, GPT2Config ``` # Initializing a GPT2 configuration configuration = GPT2Config(vocab_size=52_000) model = GPT2LMHeadModel(config=configuration) The logic for Dataset Preparation: from transformers import LineByLineTextDataset dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="./deu-de_web-public_2019_1M-sentences.txt", block_size=128, ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False, ) ``` The training logic: ``` from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./output", overwrite_output_dir=True, num_train_epochs=1, per_gpu_train_batch_size=64, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) trainer.train() ``` > Throws me: ValueError: You are attempting to pad samples but the tokenizer you are using (GPT2Tokenizer) does not have one. <|||||>You need to use `TextDataset` and cannot use line by line at the moment. You can build a large file and use your own special tokens if you build it completely from scratch or just reuse `<|endoftext|>` from pre-trained GPT-2.<|||||>@borisdayma Thaks for the quick reply! Where can I find more how these models can be trained with what kind of datasets and what kind of tokenizers and special tokens? Also, Can this be used for the reformer too? Please help me so that I can create simple and clear collab notebook and share it here so that others can easily use it. 
<|||||>A good place to start would be the language_modeling section of the [examples page from the doc](https://huggingface.co/transformers/examples.html)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi there, We are facing same issue. From the thread above I am not sure what is the fix. If I dont use line_by_line, it defines its own sequence size and concatenates the sequences from multiple lines which are unrelated. How can I make it take each lines separately as sequence.<|||||>You should set the PAD token manually equal to the EOS token. See https://github.com/huggingface/transformers/issues/2630 as well<|||||>Thanks @patrickvonplaten it worked. All i had to do is to add the following after tokenizer initialization # bug manual fix for GPT2 # https://github.com/huggingface/transformers/issues/3021 if model_args.model_type == 'gpt2': tokenizer.pad_token = tokenizer.eos_token <|||||>I'm not sure of the consequences of this. To be safe you probably also should set the IDs, then. Something like this: ``` tokenizer.pad_token_id = tokenizer.eos_token_id ``` EDIT: this is wrong, see below<|||||>@BramVanroy - it's actually not possible to set the ids equal to each other, doing `tokenizer.pad_token = tokenizer.eos_token` should work and is my recommend way of doing it :-) <|||||>@patrickvonplaten yes I tried that too bot could not set the ids<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> # bug manual fix for GPT2 > # https://github.com/huggingface/transformers/issues/3021 > if model_args.model_type == 'gpt2': > tokenizer.pad_token = tokenizer.eos_token I used this solution and the error went away. Though, this introduced a new problem for me - the model couldn't generate <|endoftext|> during inference. The model didn't learn to generate eos_token because it was ignored while computing the loss as it is same as pad_token. I had to use some other token as pad_token. Other than this, I also had to add eos_token to each list in LineByLineDataset.examples. Note: I am using transformers 3.4.0.
transformers
4,121
closed
`eos_token_id` breaks the `T5ForConditionalGeneration`
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): `T5ForConditionalGeneration` Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ```python from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration T5_PATH = 't5-base' # "t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b" DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My envirnment uses CPU t5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH) t5_config = T5Config.from_pretrained(T5_PATH) t5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE) text = 'India is a <extra_id_0> of the world. </s>' encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt') input_ids = encoded['input_ids'].to(DEVICE) # I want to set <extra_id_1> as end of a sequence. eos_token_id = t5_tokenizer.encode('<extra_id_1>')[0] outputs = t5_mlm.generate(input_ids=input_ids, num_beams=200, num_return_sequences=20, eos_token_id=eos_token_id, # <-- This is causes the error. Just removing it works fine. max_length=5) results = [t5_tokenizer.decode(output, skip_special_tokens=False, clean_up_tokenization_spaces=False) for output in outputs] print(results) ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) * [x] Just playing with the `T5` model ## To reproduce Steps to reproduce the behavior: 1. Start a new Google Colab notebook, 2. Install transformers using `pip install git+https://github.com/huggingface/transformers`, 3. Restart the runtime, and 4. Run the above snippet. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` AssertionError Traceback (most recent call last) <ipython-input-86-aae5bd45e59c> in <module>() 2 num_beams=200, num_return_sequences=20, max_length=5, 3 # repetition_penalty=1.2, ----> 4 eos_token_id=eos_token_id, 5 ) 6 # print(outputs) 2 frames /usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 13 def decorate_context(*args, **kwargs): 14 with self: ---> 15 return func(*args, **kwargs) 16 return decorate_context 17 /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id) 993 vocab_size=vocab_size, 994 encoder_outputs=encoder_outputs, --> 995 attention_mask=attention_mask, 996 ) 997 else: /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in _generate_beam_search(self, input_ids, cur_len, max_length, min_length, do_sample, early_stopping, temperature, top_k, top_p, repetition_penalty, no_repeat_ngram_size, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, decoder_start_token_id, batch_size, num_return_sequences, length_penalty, num_beams, vocab_size, encoder_outputs, attention_mask) 1359 next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx] 1360 ), "If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}".format( -> 1361 next_scores[:, :num_beams][batch_idx], beam_scores.view(batch_size, num_beams)[batch_idx], 1362 ) 1363 AssertionError: If batch_idx is not done, final next scores: tensor([-3.5015, -4.4583, -4.5677, -4.5808, -4.7325, -4.8779, -5.1127, -5.2106, -5.2147, -5.3292, -5.3575, -5.5227, -5.6846, -5.7290, -5.8227, -5.8522, -5.8537, -5.9930, -6.0119, -6.0440, -6.0469, -6.0496, -6.0739, -6.0959, -6.1007, -6.1125, -6.1519, -6.2492, -6.2850, -6.3033, -6.3035, -6.3677, -6.3798, -6.4438, -6.4508, -6.4511, -6.4993, -6.5149, -6.5269, -6.5330, -6.5335, -6.5662, -6.6071, -6.6222, -6.6331, -6.6437, -6.6454, -6.6660, -6.7163, -6.7690, -6.7841, -6.8139, -6.8147, -6.8157, -6.8297, -6.8297, -6.8326, -6.8448, -6.8493, -6.8763, -6.8866, -6.8871, -6.9026, -6.9046, -6.9078, -6.9475, -6.9544, -6.9654, -6.9672, -6.9735, -6.9749, -6.9795, -7.0020, -7.0024, -7.0124, -7.0310, -7.0837, -7.1405, -7.1407, -7.1420, -7.1652, -7.1675, -7.1788, -7.1800, -7.1875, -7.2146, -7.2148, -7.2294, -7.2538, -7.2765, -7.2775, -7.2859, -7.3110, -7.3120, -7.3189, -7.3403, -7.3438, -7.3479, -7.3535, -7.3592, -7.3733, -7.3740, -7.3842, -7.3848, -7.3882, -7.3889, -7.4192, -7.4192, -7.4211, -7.4657, -7.4662, -7.4774, -7.4776, -7.4862, -7.4999, -7.5150, -7.5156, -7.5187, -7.5214, -7.5253, -7.5365, -7.5376, -7.5547, -7.5566, -7.5815, -7.5922, -7.5940, -7.5960, -7.5996, -7.6115, -7.6128, -7.6406, -7.6485, -7.6619, -7.6744, -7.6752, -7.6804, -7.6875, -7.6928, -7.6979, -7.7064, -7.7172, -7.7229, -7.7429, -7.7431, -7.7482, -7.7510, -7.7576, -7.7619, -7.7641, -7.7655, -7.7721, -7.7787, -7.7792, -7.7842, -7.7859, -7.8005, -7.8083, -7.8087, -7.8128, -7.8145, -7.8184, -7.8185, -7.8197, -7.8228, -7.8275, -7.8381, -7.8478, -7.8572, -7.8598, -7.8677, 
-7.8712, -7.8755, -7.8820, -7.8867, -7.9135, -7.9308, -7.9485, -7.9613, -7.9629, -7.9649, -7.9706, -7.9714, -7.9739, -7.9740, -7.9757, -7.9779, -7.9809, -7.9858, -7.9964, -7.9992, -8.0047, -8.0086, -8.0105, -8.0255, -8.0500, -8.0550, -8.0604, -8.0811, -8.0858]) have to equal to accumulated beam_scores: tensor([-4.7325, -5.1127, -5.2106, -5.3292, -5.8227, -5.8522, -6.0739, -6.0959, -6.1007, -6.1125, -6.2492, -6.3033, -6.3035, -6.3798, -6.4438, -6.4508, -6.4511, -6.4993, -6.5269, -6.5330, -6.5662, -6.6331, -6.6660, -6.7163, -6.8139, -6.8157, -6.8297, -6.8297, -6.8493, -6.8763, -6.8871, -6.9026, -6.9078, -6.9475, -6.9735, -6.9749, -7.0020, -7.0024, -7.0837, -7.1405, -7.1420, -7.1652, -7.1800, -7.1875, -7.2146, -7.2294, -7.2538, -7.2765, -7.2775, -7.2859, -7.3189, -7.3403, -7.3438, -7.3535, -7.3592, -7.3733, -7.4192, -7.4192, -7.4657, -7.4662, -7.4774, -7.4999, -7.5150, -7.5156, -7.5187, -7.5253, -7.5365, -7.5547, -7.5566, -7.5815, -7.5922, -7.5940, -7.5996, -7.6115, -7.6128, -7.6406, -7.6619, -7.6744, -7.6752, -7.6804, -7.6875, -7.7172, -7.7229, -7.7429, -7.7431, -7.7482, -7.7510, -7.7619, -7.7641, -7.7655, -7.7787, -7.7792, -7.7842, -7.8083, -7.8087, -7.8145, -7.8184, -7.8185, -7.8197, -7.8228, -7.8275, -7.8478, -7.8572, -7.8598, -7.8677, -7.8712, -7.8755, -7.8820, -7.8867, -7.9613, -7.9629, -7.9706, -7.9714, -7.9739, -7.9740, -7.9757, -7.9779, -7.9809, -7.9858, -7.9964, -7.9992, -8.0047, -8.0086, -8.0255, -8.0500, -8.0550, -8.0604, -8.0811, -8.0858, -8.0862, -8.0880, -8.1197, -8.1198, -8.1261, -8.1331, -8.1337, -8.1364, -8.1509, -8.1633, -8.1681, -8.1759, -8.1843, -8.2007, -8.2019, -8.2094, -8.2223, -8.2236, -8.2504, -8.2580, -8.2653, -8.2748, -8.2778, -8.2782, -8.2860, -8.2931, -8.2932, -8.2963, -8.3047, -8.3111, -8.3161, -8.3215, -8.3255, -8.3393, -8.3439, -8.3567, -8.3608, -8.3706, -8.3810, -8.3858, -8.4023, -8.4076, -8.4111, -8.4153, -8.4198, -8.4335, -8.4365, -8.4390, -8.4397, -8.4480, -8.4511, -8.4532, -8.4594, -8.4663, -8.4681, -8.4686, -8.4720, -8.4726, -8.4778, -8.4781, -8.4786, -8.4885, -8.5027, -8.5173, -8.5181, -8.5226, -8.5298, -8.5308, -8.5341, -8.5463, -8.5466]) ``` ## Expected behavior If I remove the `eos_token_id` argument from `t5_mlm.generate(...)` (refer the given code snippet), it returns the following (expected) output: ``` ['<extra_id_0> cornerstone<extra_id_1>', '<extra_id_0> part<extra_id_1> developing', '<extra_id_0> huge part<extra_id_1>', '<extra_id_0> big part<extra_id_1>', '<extra_id_0> beautiful part<extra_id_1>', '<extra_id_0> very important part', '<extra_id_0> part<extra_id_1> larger', '<extra_id_0> unique part<extra_id_1>', '<extra_id_0> part<extra_id_1> developed', '<extra_id_0> part<extra_id_1> large', '<extra_id_0> beautiful country in', '<extra_id_0> part of the', '<extra_id_0> small part<extra_id_1>', '<extra_id_0> part<extra_id_1> great', '<extra_id_0> part<extra_id_1> bigger', '<extra_id_0> country in the', '<extra_id_0> large part<extra_id_1>', '<extra_id_0> part<extra_id_1> middle', '<extra_id_0> significant part<extra_id_1>', '<extra_id_0> part<extra_id_1> vast'] ``` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Google Colab - Python version: 3.6 - PyTorch version (GPU?): 1.5.0+cu101 - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
05-02-2020 16:19:59
05-02-2020 16:19:59
Hi @girishponkiya, Thanks for the detailed error description. I was not able to reproduce the error with the newest version of transformers, 2.9.0. See the Google Colab here: https://colab.research.google.com/drive/14TQzMBPuaNb6aZ_NoVFQrr_ZTJS2mCr3 . Can you check again with a newer version of transformers?<|||||>Hi @patrickvonplaten, Because of a bug in `tokenizers` (v0.7.0), the following code sets a wrong value to `eos_token_id`: ```python # I want to set <extra_id_1> as end of a sequence. eos_token_id = t5_tokenizer.encode('<extra_id_1>')[0] ``` The correct value of `eos_token_id` should be 32098 (index of <extra_id_1>). Just replace the above snippet with the following to reproduce the error: ```python # I want to set <extra_id_1> as end of a sequence. eos_token_id = 32098 # t5_tokenizer.encode('<extra_id_1>')[0] ``` PS: for more details on the tokenizer bug, you may refer to #4021 <|||||>Same issue with GPT-2 when generating text using beam search, with eos_token_id set to 50256 (<|endoftext|>)<|||||>I had the same issue with GPT-2 and fixed it by hacking the code in `transformers/modeling_utils.py` from <pre> if eos_token_id is not None and all( (token_id % vocab_size).item() <b>is not</b> eos_token_id for token_id in next_tokens[batch_idx] ): assert torch.all( next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx] ), "If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}".format( next_scores[:, :num_beams][batch_idx], beam_scores.view(batch_size, num_beams)[batch_idx], ) </pre> to <pre> if eos_token_id is not None and all( (token_id % vocab_size).item() <b>!=</b> eos_token_id for token_id in next_tokens[batch_idx] ): assert torch.all( next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx] ), "If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}".format( next_scores[:, :num_beams][batch_idx], beam_scores.view(batch_size, num_beams)[batch_idx], ) </pre> Hope it helps.<|||||>@junyann : Thanks, that's not a hack; it should be the right fix. Irrespective of whether the issues are solely caused by this, the "is not" comparison is wrong and should be fixed. @patrickvonplaten : Thoughts? Encountered the same assertion failure today and was looking to reproduce, ended up here.
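Side note for readers: a more robust way to look up the sentinel id is to convert the token directly rather than encoding it. A small sketch, assuming `t5-base`, where `<extra_id_1>` maps to 32098 as stated above:

```python
from transformers import T5Tokenizer

t5_tokenizer = T5Tokenizer.from_pretrained("t5-base")
# Direct lookup, unaffected by the encode() bug mentioned above.
eos_token_id = t5_tokenizer.convert_tokens_to_ids("<extra_id_1>")  # 32098
```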
transformers
4,120
closed
fixed utils_summarization import path
As per issue #3827 I've removed the dot in the import.
05-02-2020 15:36:51
05-02-2020 15:36:51
I think you need to run `make style` and resolve merge conflicts. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,119
closed
Fix markdown to show the results table properly
05-02-2020 15:32:56
05-02-2020 15:32:56
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=h1) Report > Merging [#4119](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1cdd2ad2afb73f6af185aafecb7dd7941a90c4d1&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4119/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4119 +/- ## ========================================== - Coverage 78.85% 78.84% -0.02% ========================================== Files 114 114 Lines 18688 18688 ========================================== - Hits 14737 14734 -3 - Misses 3951 3954 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.92% <0.00%> (-0.13%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=footer). Last update [1cdd2ad...397a837](https://codecov.io/gh/huggingface/transformers/pull/4119?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,118
closed
Update run_pl_ner.py
05-02-2020 13:27:18
05-02-2020 13:27:18
Thanks @williamFalcon – do we need to bump the minimal version of `pytorch-lightning` then?<|||||>We're backward compatible, so no need to bump but 0.7.5 has forward support for native amp and solves a few key bugs that were present in earlier versions. I would recommend bumping up.
transformers
4,117
closed
Update run_pl_glue.py
05-02-2020 13:26:24
05-02-2020 13:26:24
Same question as https://github.com/huggingface/transformers/pull/4118#issuecomment-622963241
transformers
4,116
closed
Updated
05-02-2020 12:53:32
05-02-2020 12:53:32
transformers
4,115
closed
Create model card
Create Model card for distilroberta-base-finetuned-sentiment
05-02-2020 11:25:31
05-02-2020 11:25:31
Great!<|||||>You do not rest even on Saturday! 🦾
transformers
4,114
closed
model card for surajp/albert-base-sanskrit
Added Model card for `surajp/albert-base-sanskrit`
05-02-2020 10:51:39
05-02-2020 10:51:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=h1) Report > Merging [#4114](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/abb1fa3f374811ea09d0bc3440d820c50735008d&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4114/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4114 +/- ## ======================================= Coverage 78.85% 78.85% ======================================= Files 114 114 Lines 18688 18688 ======================================= Hits 14736 14736 Misses 3952 3952 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.04% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=footer). Last update [abb1fa3...e50935f](https://codecov.io/gh/huggingface/transformers/pull/4114?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome! And our first model for Sanskrit. [model page](https://huggingface.co/surajp/albert-base-sanskrit)
transformers
4,113
closed
[Reformer] Move model card to google model
05-02-2020 08:24:22
05-02-2020 08:24:22
transformers
4,112
closed
Albert large QA model pretrained from baidu webqa and baidu dureader datasets.
05-02-2020 03:57:16
05-02-2020 03:57:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=h1) Report > Merging [#4112](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d713cfc5ebfb1ed83de1fce55dd7279f9db30672&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4112/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4112 +/- ## ========================================== - Coverage 78.84% 78.84% -0.01% ========================================== Files 114 114 Lines 18688 18688 ========================================== - Hits 14735 14734 -1 - Misses 3953 3954 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4112/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=footer). Last update [d713cfc...a442791](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Really cool. [Model page](https://huggingface.co/wptoux/albert-chinese-large-qa)
transformers
4,111
closed
BERT as encoder and a transformer as a decoder.
Hi there, is it possible to build a model with BERT as the encoder and a Transformer as the decoder? Thanks.
05-02-2020 00:51:24
05-02-2020 00:51:24
The encoder-decoder or BART may be what you want.<|||||>I think you should take a look at the ``encoder-decoder`` framework: https://huggingface.co/transformers/model_doc/encoderdecoder.html<|||||>Note that currently only Bert2Bert is possible.
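A minimal Bert2Bert sketch, assuming the `EncoderDecoderModel` API from the docs linked above; the cross-attention weights start out untrained, so outputs are only meaningful after fine-tuning:

```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# BERT checkpoint as encoder, BERT checkpoint as decoder (Bert2Bert).
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

input_ids = tokenizer.encode("BERT encodes this sentence.", return_tensors="pt")
# Forward pass: decoder outputs come first, followed by the encoder outputs.
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
```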
transformers
4,110
closed
NER: parse args from .args file or JSON
Hi, thanks to @julien-c, parsing args from `.args` or JSON-based configuration files was introduced into the internal argparser class in #3934. This PR adds support for it in the `run_ner.py` script. It also extends the NER documentation and shows how to use a JSON-based configuration with `run_ner.py`.
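For reference, a configuration along these lines can be written out and passed as the single argument to the script. This is only a sketch: the field names assume the standard `run_ner.py` arguments and the values are placeholders.

```python
import json

# Hypothetical config; adjust paths and hyper-parameters to your setup.
config = {
    "data_dir": "./data",
    "labels": "./labels.txt",
    "model_name_or_path": "bert-base-multilingual-cased",
    "output_dir": "./germeval-model",
    "max_seq_length": 128,
    "num_train_epochs": 3,
    "per_gpu_train_batch_size": 32,
    "do_train": True,
    "do_eval": True,
}
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
# then: python run_ner.py config.json
```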
05-01-2020 22:57:41
05-01-2020 22:57:41
transformers
4,109
closed
Fix #2941
Reshaped score array to avoid `numpy` ValueError. This should allow the sentiment analyzer pipeline to run.
05-01-2020 21:47:55
05-01-2020 21:47:55
Uh, I guess it should be `reshape(-1, 1)` instead of `reshape(-1,1)` regarding code quality issues, but I'm not sure whether it's the correct fix.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=h1) Report > Merging [#4109](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d713cfc5ebfb1ed83de1fce55dd7279f9db30672&el=desc) will **not change** coverage. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4109/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4109 +/- ## ======================================= Coverage 78.84% 78.84% ======================================= Files 114 114 Lines 18688 18688 ======================================= Hits 14735 14735 Misses 3953 3953 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `74.94% <100.00%> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=footer). Last update [d713cfc...e12b3d5](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
4,108
closed
Feature/torchserve interface [WIP]
Work in progress PR to add a CLI interface for easy packaging of transformers models for serving in [pytorch/serve](https://github.com/pytorch/serve). Intended usage example: `transformers-cli torchserve --checkpoint="distilbert-base-uncased-finetuned-sst-2-english" --tokenizer="distilbert-base-uncased" --model-name="distilbert" --task="sentiment-analysis"` The call above produces a MAR (model archive) file that can be served directly by the `torchserve` binary.
05-01-2020 20:37:45
05-01-2020 20:37:45
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,107
closed
Fix `RobertaClassificationHead` style consistency.
There's a slight inconsistency in `RobertaClassificationHead` in that it takes in the whole sequence output from the `RobertaModel`, and extracts the pooled output inside its own forward method, seen [here](https://github.com/huggingface/transformers/blob/d713cfc5ebfb1ed83de1fce55dd7279f9db30672/src/transformers/modeling_roberta.py#L573). This is different from other models, where the pooled output is computed beforehand and directly passed on to the classifier. E.g. in [`BertForSequenceClassification`](https://github.com/huggingface/transformers/blob/d713cfc5ebfb1ed83de1fce55dd7279f9db30672/src/transformers/modeling_bert.py#L1147), [`DistilBertForSequenceClassification`](https://github.com/huggingface/transformers/blob/d713cfc5ebfb1ed83de1fce55dd7279f9db30672/src/transformers/modeling_distilbert.py#L614), [`BartForSequenceClassification`](https://github.com/huggingface/transformers/blob/d713cfc5ebfb1ed83de1fce55dd7279f9db30672/src/transformers/modeling_bart.py#L1097), etc.
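A shape-level sketch of the difference (illustrative only, not a proposed change):

```python
import torch

batch_size, seq_len, hidden_size = 2, 10, 768
sequence_output = torch.randn(batch_size, seq_len, hidden_size)

# RobertaForSequenceClassification: the head receives the full sequence output
# and slices the first (<s>) position itself inside its forward method.
roberta_head_input = sequence_output            # (2, 10, 768)
x = roberta_head_input[:, 0, :]                 # done inside RobertaClassificationHead

# BertForSequenceClassification and most other models: the pooled output
# (the model's pooler applied to the first token) is computed beforehand,
# so only a (batch, hidden) vector reaches the classifier.
bert_classifier_input = sequence_output[:, 0, :]  # stand-in for the pooler output, (2, 768)
```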
05-01-2020 20:17:29
05-01-2020 20:17:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=h1) Report > Merging [#4107](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.21%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4107/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4107 +/- ## ========================================== - Coverage 77.83% 76.62% -1.22% ========================================== Files 141 141 Lines 24634 24634 ========================================== - Hits 19175 18876 -299 - Misses 5459 5758 +299 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.00% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.04% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.95% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.82% <0.00%> (-0.18%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=footer). Last update [58cca47...21fae42](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Creating new one with all checks passed.
transformers
4,106
closed
FIXME(Actually test multi input pipelines)
05-01-2020 19:51:15
05-01-2020 19:51:15
Closing in favor of #4154
transformers
4,105
closed
model from path 16-bits training:True but float16 false
# ❓ Questions & Help ## Details **A link to original question on Stack Overflow**:
05-01-2020 19:50:23
05-01-2020 19:50:23
Hi all I am gonna run "run_language_modeling.py" on 1 GPU 1080 Ti and loading distill bert from directory and using fp16 , apex it is run OK and i am getting trace back of 05/01/2020 22:24:38 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: store_true 05/01/2020 22:24:38 - INFO - transformers.configuration_utils - loading configuration file ./save_pre_trn_model/config.json 05/01/2020 22:24:38 - INFO - transformers.configuration_utils - Model config DistilBertConfig { "_num_labels": 2, "activation": "gelu", "architectures": [ "DistilBertModel" ], "attention_dropout": 0.1, "bad_words_ids": null, "bos_token_id": null, "decoder_start_token_id": null, "dim": 768, "do_sample": false, "dropout": 0.1, "early_stopping": false, "eos_token_id": null, "finetuning_task": null, "hidden_dim": 3072, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "is_decoder": false, "is_encoder_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "min_length": 0, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "no_repeat_ngram_size": 0, "num_beams": 1, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pad_token_id": 0, "prefix": null, "pruned_heads": {}, "qa_dropout": 0.1, "repetition_penalty": 1.0, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "task_specific_params": null, "temperature": 1.0, "tie_weights_": true, "top_k": 50, "top_p": 1.0, "torchscript": false, "use_bfloat16": false, "vocab_size": 30000 } <|||||>Is it OK to see 16-bits training:True (i have written "store_true" in the code of it instead of True but those are doing the same thing) and "use_bfloat16": false with each other ? what does "use_bfloat16": false means ?<|||||>[As far as I can see](https://github.com/huggingface/transformers/search?q=use_bfloat16&type=Code) `bfloat16` is only relevant for Tensorflow XLNet, so no need to worry.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
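For context, a minimal sketch of what `--fp16` does in `run_language_modeling.py`. This assumes NVIDIA apex is installed and a CUDA device is available; the checkpoint name is illustrative. The `use_bfloat16` config flag is unrelated, as noted above, since it only concerns the TensorFlow XLNet implementation.

```python
from apex import amp
from transformers import AutoModelWithLMHead, AdamW

model = AutoModelWithLMHead.from_pretrained("distilbert-base-uncased").to("cuda")
optimizer = AdamW(model.parameters(), lr=5e-5)

# Equivalent of passing --fp16 --fp16_opt_level O1 to the script.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
```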
transformers
4,104
closed
Fix pytorch lighting examples
It throws the exception 'Trainer' object has no attribute 'avg_loss' because since version 0.7.2 they removed the avg_loss field from the Trainer class. Also `get_tqdm_dict` is deprecated since 0.7.3. See https://github.com/huggingface/transformers/pull/2890#issuecomment-613066707
05-01-2020 18:09:27
05-01-2020 18:09:27
@simonepri can you point to the examples that need fixing?<|||||>https://github.com/huggingface/transformers/blob/master/examples/glue/run_pl_glue.py https://github.com/huggingface/transformers/blob/master/examples/ner/run_pl_ner.py <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,103
closed
AttributeError: 'NoneType' object has no attribute 'abs' when run example/run_bertology.py
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) example/run_bertology.py The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) GLUE mnli ## To reproduce Steps to reproduce the behavior: export TASK_NAME=mnli python ./run_bertology.py --data_dir $GLUE_DIR/$TASK_NAME --model_name bert-base-uncased --task_name $TASK_NAME --max_seq_length 128 --output_dir ./tmp/$TASK_NAME/ --try_masking <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> After 2 Iterations: Traceback (most recent call last): File "run_bertology.py", line 427, in <module> main() File "run_bertology.py", line 422, in main head_mask = mask_heads(args, model, eval_dataloader) File "run_bertology.py", line 180, in mask_heads args, model, eval_dataloader, compute_entropy=False, head_mask=new_head_mask File "run_bertology.py", line 105, in compute_heads_importance head_importance += head_mask.grad.abs().detach() AttributeError: 'NoneType' object has no attribute 'abs' I printed the 'head_mask' when the error occurs: head_mask: tensor([[1., 0., 1., 1., 1., 1., 0., 0., 1., 0., 1., 1.], [1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 0., 0.], [0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 0., 1., 0., 1., 1., 0., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 0., 1., 1., 1., 0., 0., 1.], [1., 0., 1., 1., 0., 1., 0., 1., 1., 0., 1., 1.], [1., 0., 0., 1., 0., 0., 0., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 0.]], device='cuda:0', grad_fn=<ViewBackward>) ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> At the 3 positions of tensor, it should be "requires_grad=True". head_mask: tensor([[1., 1., 1., 1., 1., 1., 0., 0., 1., 0., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 0.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1.], [1., 1., 1., 1., 1., 1., 0., 1., 1., 0., 1., 1.], [1., 0., 1., 1., 1., 0., 0., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 0.]], device='cuda:0', requires_grad=True) ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:2.8.0 - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?):1.15 - Using GPU in script?:yes - Using distributed or parallel set-up in script?:
05-01-2020 17:33:58
05-01-2020 17:33:58
I also have this error. Does anyone have a solution?
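A possible explanation and workaround (a sketch only, not the official fix): the masked `head_mask` printed above carries a `grad_fn`, i.e. it is a non-leaf tensor, and PyTorch does not populate `.grad` on non-leaf tensors, which is why `head_mask.grad` ends up as `None`.

```python
import torch

mask = torch.ones(12, 12, requires_grad=True)
non_leaf = mask.clone().view(12, 12)   # has grad_fn=<ViewBackward>, so it is not a leaf
non_leaf.sum().backward()
print(non_leaf.grad)                   # None -> same failure path as head_mask.grad.abs() above

# Possible workarounds before the mask is passed back into compute_heads_importance:
# new_head_mask = new_head_mask.detach().clone().requires_grad_(True)
# or keep the gradient on the non-leaf tensor with new_head_mask.retain_grad()
```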
transformers
4,102
closed
Added huseinzol05/gpt2-345M-bahasa-cased
05-01-2020 16:15:51
05-01-2020 16:15:51
transformers
4,101
closed
Docs: add XLM-RoBERTa to multi-lingual section
Hi, this PR adds a short description of available XLM-R models to the multi-lingual documentation :)
05-01-2020 14:58:16
05-01-2020 14:58:16
transformers
4,100
closed
Masking in Bert
I am not able to grasp the concept of attention masking in BERT or the other transformers. <img width="790" alt="Screenshot 2020-05-01 at 7 56 06 PM" src="https://user-images.githubusercontent.com/45225143/80812561-d37a4a00-8be5-11ea-9da6-469b0d2d3f8f.png"> According to the documentation (screenshot above), the positions marked with 1 are going to be masked and those marked with 0 are not, so I tried to experiment with it and got this result. <img width="510" alt="Screenshot 2020-05-01 at 8 00 02 PM" src="https://user-images.githubusercontent.com/45225143/80813016-d1fd5180-8be6-11ea-8fcc-c86943fc003f.png"> In my results, the numbers on the left are 0 and those on the right are non-zero. I still don't see how this code fits into the whole picture. TL;DR: in general terms, if anyone could explain attention masking in the self-attention part of the code in transformers and its variants (possibly with code), it would be great.
05-01-2020 14:34:02
05-01-2020 14:34:02
I don't fully understand the question, but love the color theme :)<|||||>> I don't fully understand the question, but love the color theme :) Hey thanks😄its cobalt on vscode. Have edited the question with tldr pls let me know on it.<|||||>That being said, please don't use screenshots. They are hard to read (especially on phones) and make it impossible to copy-and-paste your code. Use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks) instead. I also don't quite understand your question. Do you have a general question about how attention works? Please use [Stack Overflow](https://stackoverflow.com/) for this, as the template clearly mentions.<|||||>> That being said, please don't use screenshots. They are hard to read (especially on phones) and make it impossible to copy-and-paste your code. Use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks) instead. I also don't quite understand your question. Do you have a general question about how attention works? Please use [Stack Overflow](https://stackoverflow.com/) for this, as the template clearly mentions. Sure will follow the guidelines, closing the issue.
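For readers landing here, a short sketch of what the mask means in practice (illustrative, and mirroring what `modeling_bert.py` does internally): a 1 marks a real token that is attended to, a 0 marks padding that is turned into a large negative additive bias before the softmax.

```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer.batch_encode_plus(
    ["short input", "a noticeably longer input sentence"],
    pad_to_max_length=True,
    return_tensors="pt",
)
print(enc["attention_mask"])  # 1 = keep (attend to), 0 = padding (ignore)

# Inside the model the mask becomes an additive bias on the attention scores,
# so masked positions end up with near-zero probability after the softmax.
extended_mask = (1.0 - enc["attention_mask"][:, None, None, :].float()) * -10000.0
```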
transformers
4,099
closed
Added GePpeTto card
05-01-2020 14:07:19
05-01-2020 14:07:19
Awesome :-)
transformers
4,098
closed
[Fix #3963] GPT2 FP16
fix #3963 GPT2 failing (through run_language_modeling.py) in fp16 mode.
05-01-2020 13:00:57
05-01-2020 13:00:57
Gunna merge this at 7pm EST barring objections @thomwolf @julien-c <|||||>Pinging @thomwolf especially as he'd want to review this imo
transformers
4,097
closed
Fix gpt2 fp16
fix #3963
05-01-2020 12:58:52
05-01-2020 12:58:52
transformers
4,096
closed
Default models for different pipelines
Hey, where can I find the default models that are used for the different pipelines?
05-01-2020 10:10:46
05-01-2020 10:10:46
Defaults are at https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L1459 . There is often a different default for tf and pt (pytorch).<|||||>Closing this as i think the question is resolved (also it's probably a better match for Stack Overflow)<|||||>What is the default model for the 'fill-mask' pipeline? I'm not able to tell from the previous answer in this thread. Any assistance much appreciated.<|||||>The defaults are defined [here](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L2987-L3087). The `fill-mask` pipeline uses the `distilroberta-base` checkpoint.<|||||>The above answers are out of date. The defaults are now defined in [pipelines/__init__.py](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/__init__.py) (in the values of the SUPPORTED_TASKS dictionary).
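A quick way to inspect this programmatically (a sketch against the current layout; `SUPPORTED_TASKS` is an internal detail and may change between releases):

```python
from transformers.pipelines import SUPPORTED_TASKS

# Print the default checkpoint spec for every pipeline task.
for task, spec in SUPPORTED_TASKS.items():
    print(task, spec["default"])
```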
transformers
4,095
closed
Fix object is not subscriptable error in BertEncoder (#1188)
Fix object is not subscriptable error in BertEncoder when head_mask is None. Issue #1188 describes the problem: BertLayer accepts head_mask as None, but if BertEncoder receives head_mask as None, it tries to index None.
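Purely illustrative sketch of the guard this PR describes (not the library code itself): index the head mask per layer only when one is provided, otherwise hand each layer a None.

```python
num_layers = 12
head_mask = None  # BertLayer already accepts this

for i in range(num_layers):
    # Avoids "'NoneType' object is not subscriptable" when no mask is given.
    layer_head_mask = head_mask[i] if head_mask is not None else None
    # layer_outputs = layer_module(hidden_states, attention_mask, layer_head_mask)
```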
05-01-2020 09:56:33
05-01-2020 09:56:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=h1) Report > Merging [#4095](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8686174be75220d2c26a961597a39ef4921b616&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `50.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4095/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4095 +/- ## ======================================= Coverage 78.84% 78.84% ======================================= Files 114 114 Lines 18691 18693 +2 ======================================= + Hits 14737 14739 +2 Misses 3954 3954 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.68% <50.00%> (-0.15%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.34%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.55% <0.00%> (+0.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=footer). Last update [b868617...d506143](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Indeed :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
4,094
closed
Negative dimension when initialising the XLNetModel
# 🐛 Bug ## Information XLNetModel: ## To reproduce Steps to reproduce the behavior: ``` import torch from pytorch_transformers import * # PyTorch-Transformers has a unified API # for 7 transformer architectures and 30 pretrained weights. # Model | Tokenizer | Pretrained weights shortcut MODELS = [(XLNetModel, XLNetTokenizer, 'xlnet-base-cased')] # Let's encode some text in a sequence of hidden-states using each model: for model_class, tokenizer_class, pretrained_weights in MODELS: # Load pretrained model/tokenizer tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained(pretrained_weights) ``` with the code above I got following errors: ``` Traceback (most recent call last): File "/Users/xx/xxx/xxx/test.py", line 424, in <module> model = model_class.from_pretrained(pretrained_weights) File "/Users/yxu132/pyflow3.6/lib/python3.6/site-packages/pytorch_transformers/modeling_utils.py", line 536, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/Users/yxu132/pyflow3.6/lib/python3.6/site-packages/pytorch_transformers/modeling_xlnet.py", line 731, in __init__ self.word_embedding = nn.Embedding(config.n_token, config.d_model) File "/Users/xx/pyflow3.6/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 97, in __init__ self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim)) RuntimeError: Trying to create tensor with negative dimension -1: [-1, 768] ``` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 1.2.0 - Platform: MacOS - Python version: Python3.6 - PyTorch version (GPU?): Torch 1.5 (with and without GPU)
05-01-2020 09:36:18
05-01-2020 09:36:18
Hi, I can't reproduce this on `master`, could you upgrade your library to the latest version and let me know if you face the same issue?<|||||>I think torch-1.5.0 and pytorch-transformers-1.2.0 are the latest versions, no? I upgraded to python3.7 and tried the above again and still get the same issue. <|||||>That's because `pytorch-transformers` became `transformers` in September!<|||||>Thanks! It works now. Sorry for asking such a dumb question... <|||||>No worries, glad you could make it work!<|||||>Cool! Problem solved. Will close this issue. Nice weekend!<|||||>> That's because `pytorch-transformers` became `transformers` in September! what does that mean? Sorry, I didn't get your point and I have the same issue.<|||||>> > That's because `pytorch-transformers` became `transformers` in September! > > what does that mean? Sorry, I didn't get your point and I have the same issue. Instead of installing and using pytorch-transformers, install and use transformers. For example, > from transformers import *
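For anyone hitting this later, the renamed-package version of the snippet (a minimal sketch, using the same checkpoint as above) looks like this:

```python
import torch
from transformers import XLNetModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute")])
last_hidden_state = model(input_ids)[0]  # (batch, seq_len, hidden)
```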
transformers
4,093
closed
Fix overwrite_cache behaviour for pytorch lightning examples
cc: @nateraw Ref: #3290
05-01-2020 08:44:19
05-01-2020 08:44:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=h1) Report > Merging [#4093](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8686174be75220d2c26a961597a39ef4921b616&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4093/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #4093 +/- ## ======================================= Coverage 78.84% 78.85% ======================================= Files 114 114 Lines 18691 18691 ======================================= + Hits 14737 14738 +1 + Misses 3954 3953 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.34%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=footer). Last update [b868617...d0f7228](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Not sure when it got broken, but this looks ok at first glance to me. I'll pull it down tonight as a sanity check to run it. Thank you! <|||||>LGTM too 😄 Thank you again, @simonepri