Dataset columns: repo (string, 1 class) · number (int64, 1 to 25.3k) · state (string, 2 classes) · title (string, 1 to 487 chars) · body (string, 0 to 234k chars) · created_at (string, 19 chars) · closed_at (string, 19 chars) · comments (string, 0 to 293k chars)
transformers
5,100
closed
Update Conda Release
When trying to get started with transformers using some examples from the model cards, the installation with conda results in an outdated version (2.1.1). As a result, the example cannot run. However, when using pip to install transformers the newest version (2.11) is correctly used. I would suggest updating the conda release in order to avoid installing via pip.
06-18-2020 08:32:56
06-18-2020 08:32:56
This might provide more information: https://github.com/conda-forge/transformers-feedstock/issues/3 In short, the conda-forge update is blocked by `sentencepiece` not being available through conda.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>The feedstock has now been updated for Linux and is available. Other OSes are still waiting on sentencepiece.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This should be fixed by #8073 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
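A quick sanity check for this kind of mismatch (not from the thread, just a generic verification step) is to print the version that actually got imported after installing:

```python
import transformers

# If this prints 2.1.1, the stale conda package was picked up; pip installs 2.11.x.
print(transformers.__version__)
```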
transformers
5,099
closed
Fix TF WarmUp class
This commit fixes the TF WarmUp learning rate scheduler: the learning-rate curve was wrong because the scheduler's step was not adjusted for the warmup steps. See the linked issue for more details. Fixes #5098
06-18-2020 05:16:29
06-18-2020 05:16:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=h1) Report > Merging [#5099](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efeb75b8054cc299698cf8bc09f395ada2660745&el=desc) will **increase** coverage by `0.04%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5099/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5099 +/- ## ========================================== + Coverage 77.24% 77.29% +0.04% ========================================== Files 133 133 Lines 22134 22134 ========================================== + Hits 17097 17108 +11 + Misses 5037 5026 -11 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <ø> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+1.24%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=footer). Last update [efeb75b...d99033f](https://codecov.io/gh/huggingface/transformers/pull/5099?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks @Colanim! Good catch!! Unfortunately it is a duplicate of #4940 :smile: and should be merged soon :)<|||||>Haha didn't see x)
transformers
5,098
closed
🐛 [TF] `create_optimizer` wrong superposition of learning rate schedules
# 🐛 Bug When using `create_optimizer`, two learning rate schedules are stacked on top of each other (WarmUp and the Keras PolynomialDecay): https://github.com/huggingface/transformers/blob/efeb75b8054cc299698cf8bc09f395ada2660745/src/transformers/optimization_tf.py#L70-L80 But the step in the WarmUp schedule is not adjusted to account for the warmup steps, which leads to a wrong learning rate: ![image](https://user-images.githubusercontent.com/43774355/84980216-69edd500-b16c-11ea-97f6-44642bdc64ce.png) --- Expected learning rate shape: ![Dessin sans titre](https://user-images.githubusercontent.com/43774355/84981152-d2d64c80-b16e-11ea-9092-ccc50217fc66.png)
06-18-2020 05:08:37
06-18-2020 05:08:37
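A minimal sketch of the intended warmup/decay composition described in the issue above, assuming the fix is to offset the step passed to the wrapped decay schedule by the warmup steps (the class and argument names here are illustrative, not the library's exact implementation):

```python
import tensorflow as tf

class WarmUpThenDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Linear warmup, then hand off to a wrapped decay schedule (sketch)."""

    def __init__(self, initial_learning_rate, decay_schedule_fn, warmup_steps):
        super().__init__()
        self.initial_learning_rate = initial_learning_rate
        self.decay_schedule_fn = decay_schedule_fn
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        warmup_steps = tf.cast(self.warmup_steps, tf.float32)
        warmup_lr = self.initial_learning_rate * (step / warmup_steps)
        return tf.cond(
            step < warmup_steps,
            lambda: warmup_lr,
            # Offset the step so the decay schedule starts from 0 after warmup,
            # instead of being evaluated at the raw global step.
            lambda: self.decay_schedule_fn(step - warmup_steps),
        )
```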
transformers
5,097
closed
Training the BERTSUM model
Hi, I find that the script can only predict the summaries using the BERTSUM model. Is it possible to train the BERTSUM model using this script?
06-18-2020 05:02:07
06-18-2020 05:02:07
Hi, as seen with @sshleifer, you can train for summarization using the [summarization script](https://github.com/huggingface/transformers/tree/master/examples/summarization). The models supported right now are all BART variants and t5-small. More to come!
transformers
5,096
closed
Can I train a BART model from scratch with transformers?
Can I train a BART model from scratch with transformers?
06-18-2020 04:46:37
06-18-2020 04:46:37
Yes<|||||>> Yes That' s awesome!Can you give a code to show? I'm grateful!<|||||>So from the paper: https://arxiv.org/pdf/1910.13461.pdf, you can see that Bart is trained on denoising input sequences in almost any possible way. One way could be for `BartForConditionalGeneration`: ```python from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig tok = BartTokenizer.from_pretrained("facebook/bart-large") model = BartForConditionalGeneration(BartConfig()) input_string = "My dog is <mask> </s>" decoder_input_string = "<s> My dog is cute" labels_string = "My dog is cute </s>" input_ids = tok(input_string, add_special_tokens=False, return_tensors="pt").input_ids decoder_input_ids =tok(decoder_input_string, add_special_tokens=False, return_tensors="pt").input_ids labels = tok(labels_string, add_special_tokens=False, return_tensors="pt").input_ids loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0] ```<|||||>Pinging @sshleifer to make sure I did not forget anything<|||||>> Pinging @sshleifer to make sure I did not forget anything Actually, I was going to ask. how train a model from zero to one. For example, I want to train a Chinese bart model.<|||||>Here's a working example for this, including batching: ``` from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig tok = BartTokenizer.from_pretrained("facebook/bart-large") model = BartForConditionalGeneration(BartConfig()) input_batch = ["My dog is <mask></s>", "It loves to play in the <mask></s>"] decoder_input_batch = ["<s>My dog is cute", "<s>It loves to play in the park"] labels_batch = ["My dog is cute</s>", "It loves to play in the park</s>"] input_ids = tok.batch_encode_plus(input_batch, add_special_tokens=False, return_tensors="pt", padding=True).input_ids decoder_input_ids = tok.batch_encode_plus(decoder_input_batch, add_special_tokens=False, return_tensors="pt", padding=True).input_ids labels = tok.batch_encode_plus(labels_batch, add_special_tokens=False, return_tensors="pt", padding=True).input_ids loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0] ``` `>>>` `tensor(10.9981, device='cuda:0', grad_fn=<NllLossBackward>)`<|||||>> Here's a working example for this, including batching: > > ``` > from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig > > tok = BartTokenizer.from_pretrained("facebook/bart-large") > model = BartForConditionalGeneration(BartConfig()) > > input_batch = ["My dog is <mask></s>", "It loves to play in the <mask></s>"] > decoder_input_batch = ["<s>My dog is cute", "<s>It loves to play in the park"] > labels_batch = ["My dog is cute</s>", "It loves to play in the park</s>"] > > input_ids = tok.batch_encode_plus(input_batch, add_special_tokens=False, return_tensors="pt", padding=True).input_ids > decoder_input_ids = tok.batch_encode_plus(decoder_input_batch, add_special_tokens=False, return_tensors="pt", padding=True).input_ids > labels = tok.batch_encode_plus(labels_batch, add_special_tokens=False, return_tensors="pt", padding=True).input_ids > > loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0] > ``` > > `>>>` `tensor(10.9981, device='cuda:0', grad_fn=<NllLossBackward>)` input_batch = ["My dog is <mask></s>", "It loves to play in the <mask></s>"] decoder_input_batch = ["<s>My dog is cute", "<s>It loves to play in the park"] labels_batch = ["My dog is cute</s>", "It loves to play in the park</s>"] If I have a text document, each line 
of a paragraph, how do I rewrite the data input on it? Thanks!<|||||>@tomhosking the paper indicates that it uses both sentence permutation (loss is propagated from all tokens instead of only masked tokens) and infilling (include only one mask token for multiple consecutive masks). would this be a correct input? input_batch = ["\<s>It is \<mask\> retriever. My dog is \<mask\>\</s>", "\<s>There \<mask\> in SF. It loves to play in the \<mask\>\</s>"] decoder_input_batch = ["\</s>\<s>My dog is cute. It is a golden retriever", "\</s>\<s>It loves to play in the park. There are many parks in SF."] labels_batch = ["\<s>My dog is cute. It is a golden retriever\</s>", "\<s>It loves to play in the park. There are many parks in SF.\</s>"] (Note: decoder_input_batch starts with \</s>\<s> due to shift_tokens_right #7961)<|||||>Sorry for the intrusion, but I think your values are almost correct @swethmandava, except for the masking absence ```python input_batch = ["<s>It <mask> retriever. My <mask> cute </s>", ... ] decoder_input_batch = ["</s><s>My dog is cute. It is a golden retriever", ...] labels_batch = ["<s>My dog is cute. It is a golden retriever</s>", ...] ``` BTW: This `</s>` token at the beginning of decode's input is kind of weird to me, but it's inherited from the fairseq original code. If you wanna train the model from scratch with random weights I think you can go without this... or maybe this trick is important for convergence, we never know :grin:<|||||>Will only 15% mask in the encoder input cause some kind of leakage? The language model in the decoder cannot learn correctly<|||||>If anyone wants to train their MBART model then feel free to use this. https://github.com/prajdabre/yanmtt Contributions are welcome!<|||||>> Sorry for the intrusion, but I think your values are almost correct @swethmandava, except for the masking absence > > ```python > input_batch = ["<s>It <mask> retriever. My <mask> cute </s>", ... ] > decoder_input_batch = ["</s><s>My dog is cute. It is a golden retriever", ...] > labels_batch = ["<s>My dog is cute. It is a golden retriever</s>", ...] > ``` > > BTW: This `</s>` token at the beginning of decode's input is kind of weird to me, but it's inherited from the fairseq original code. If you wanna train the model from scratch with random weights I think you can go without this... or maybe this trick is important for convergence, we never know 😁 I have a non-natural language dataset where I haven't actually been including `<s>` and `</s>` since they don't add any value (and need to be removed later anyway). To work with that, should I insert a pad token at the start of the `decoder_input` representation (and truncate to max_length)?<|||||>> So from the paper: https://arxiv.org/pdf/1910.13461.pdf, you can see that Bart is trained on denoising input sequences in almost any possible way. 
> > One way could be for `BartForConditionalGeneration`: > > ```python > from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig > > tok = BartTokenizer.from_pretrained("facebook/bart-large") > model = BartForConditionalGeneration(BartConfig()) > > input_string = "My dog is <mask> </s>" > decoder_input_string = "<s> My dog is cute" > labels_string = "My dog is cute </s>" > > input_ids = tok(input_string, add_special_tokens=False, return_tensors="pt").input_ids > decoder_input_ids =tok(decoder_input_string, add_special_tokens=False, return_tensors="pt").input_ids > labels = tok(labels_string, add_special_tokens=False, return_tensors="pt").input_ids > > loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0] > ``` Hi, do you have a script to build the training dataset of BART pertain, thanks<|||||>@patrickvonplaten @sshleifer Did anyone ever come around to creating a notebook/script for BART pretraining? (In a linked issue you mentioned it was on the to-do list.) The core difficulty is having a canonical implementation for the data preprocessing (BART is more than just token masking, I believe: e.g.,span masking, shuffling). But a full pretrain pipeline here or in fairseq is also sorely missing. <|||||>Sadly not :-/ We now have on for Flax in #18297 - could you try to copy-paste the preprocessing logic into a PyTorch one maybe? <|||||>@patrickvonplaten I've been porting the fairseq implementation to a PyTorch dataloader format. I found that the Flax implementation in HF lacks adding noise for 0-length spans and has some slightly diverging implementation so it was more straightforward to start from the fairseq implementation. I am now especially testing the data processing to get it as close as possible to fairseq's implementation (although it is my believe that [there's a bug in their code](https://github.com/facebookresearch/fairseq/issues/4695)). I would like to add a full pytorch example for DLM training of BART in the coming days/weeks but I could use some code reviews in doing that to feel more comfortable. Would that be possible?<|||||>Sure, happy to take a look! <|||||>Hi I remember posting this a year ago but I've written an entire toolkit for this purpose. Feel free to use it. https://github.com/prajdabre/yanmtt I've also created a simple notebook for the same (scroll to the pretraining part): https://colab.research.google.com/drive/1ovlA_h0ggblawqR-yCgRs3uRjxFJ8K0l?usp=sharing <|||||>Hi Raj, thank you for this. I had come across it but your script seems to have a lot of additional things going on so that it is hard to extract the basics. I also found that you implement word/span masking but not the other things like adding noise or randomly swap a masked token for a random token, so not _completely_ like the original implementation (but correct me if I'm wrong!) . I think your library can be very useful to be used as a separate library, thanks! In addition I'll try add a PR in `transformers` for an succinct example to use within transformers with the `Trainer`, with data processing close the `fairseq` implementation.<|||||>Hi, My focus was more on mbart and mt5 which looked only at span masking and reordering. I'm not sure if token replacement will have that big of an impact but can be easily implemented in 1 line. To my understanding, span masking is responsible for majority of the gains. The notebook contains a more watered down version of the masking method in my toolkit. 
You could consider that version and build on top of it easily.<|||||>Hey guys, I would want to know how to pre-training BART model from scratch. Anyone who know about this? BART, pegasus or other text summarization models are okay for me.
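For readers asking how to turn a raw text file into such batches, here is a toy noising helper (a rough sketch of text infilling only; the real BART/fairseq preprocessing also permutes sentences and samples span lengths from a Poisson distribution, and the helper name is made up):

```python
import random
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig

def infill_one_span(text, tokenizer, mask_ratio=0.3):
    """Replace one random contiguous span of words with a single <mask> token."""
    words = text.split()
    span_len = max(1, int(len(words) * mask_ratio))
    start = random.randrange(0, len(words) - span_len + 1)
    noised = words[:start] + [tokenizer.mask_token] + words[start + span_len:]
    return " ".join(noised), text

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration(BartConfig())

src, tgt = infill_one_span("My dog is cute. It is a golden retriever.", tokenizer)
batch = tokenizer(src, return_tensors="pt", padding=True)
labels = tokenizer(tgt, return_tensors="pt", padding=True).input_ids

# The model shifts the labels internally to build decoder_input_ids.
loss = model(input_ids=batch.input_ids, labels=labels)[0]
```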
transformers
5,095
closed
Addition of VisualBERT
# 🌟 New model addition ## Model description The VisualBERT model is used for multi-modal processing when the modes of images and text are present. It takes in object detection features from images, and combines them with textual embeddings from the pre-trained BERT models, pre-trained the whole thing on COCO image captioning data, using a similar MLM task as BERT. It has been shown to work well on several multi-modal tasks such as VQA, VCR, NLVR, etc. <!-- Important information --> ## Open source status The source code presented along with the paper can be found at https://github.com/uclanlp/visualbert * [x] the model implementation is available: (give details) The model implementation can be found on the GitHub repository, in the models section: https://github.com/uclanlp/visualbert/tree/master/models This code was provided along with the paper. Another implementation, which is slightly harder to understand because of complex dependencies, is implemented in the Facebook Research's MMF framework: https://github.com/facebookresearch/mmf/blob/master/mmf/models/visual_bert.py * [x] the model weights are available: (give details) The model checkpoints that the authors used are presented as drive links in the given repository, depending on which pre-training we want. There are several links on the README file of the GitHub repository. * [x] who are the authors: (mention them, if possible by @gh-username) - Kai-Wei Chang: @KaiWeiChang - Liunian Harold Li: @liunian-harold-li - Mark Yatskar - Da Yin - Cho-Jui Hsieh - Kai-Wei Chang I want to contribute the model myself, please let me know if this is the right avenue for this, and how I can contribute.
06-17-2020 21:59:28
06-17-2020 21:59:28
This is very interesting!<|||||>This has been proposed before as a separate issue but no action was taken. Hence, I thought I'll start implementing some of the multi-modal models one by one.<|||||>Please let @liunian-harold-li and me know if you need any help. We can also provide the pre-trained models. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,094
closed
Different output from model on CPU and GPU
# 🐛 Bug I trained models using the `run_glue.py` script and uploaded them to the model hub. I've realized that their output slightly differs between when I do inference on the CPU and the GPU. Absolute errors are small – on the order of `1e-7` – but that turns out to be too much for my use case. ## Information Model I am using (Bert, XLNet ...): BERT Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: MNLI * [ ] my own task or dataset: (give details below) ## To reproduce ```python (torch) qcuda8 04:47 PM > python Python 3.7.7 (default, Mar 26 2020, 15:48:22) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> model_cpu = transformers.AutoModelForSequenceClassification.from_pretrained('textattack/bert-base-uncased-MNLI') >>> model_gpu = transformers.AutoModelForSequenceClassification.from_pretrained('textattack/bert-base-uncased-MNLI').to('cuda') >>> >>> premise = "Among these are the red brick Royal Palace, which now houses the Patan Museum (Nepal's finest and most modern museum), and, facing the palace across the narrow brick plaza, eight temples of different styles and sizes." >>> hypothesis = "The Patan Museum is down the street from the red brick Royal Palace." >>> >>> tokenizer = transformers.AutoTokenizer.from_pretrained('textattack/bert-base-uncased-MNLI') >>> encoded_text = tokenizer.encode_plus((premise, hypothesis), return_tensors='pt') >>> encoded_text['input_ids'] tensor([[101, 100, 100, 102]]) >>> encoded_text_cuda = {k: v.cuda() for k,v in encoded_text.items()} >>> model_gpu(**encoded_text_cuda)[0].squeeze().tolist() [-1.0867613554000854, 0.6688923239707947, 0.30274006724357605] >>> model_cpu(**encoded_text)[0].squeeze().tolist() [-1.0867608785629272, 0.6688917279243469, 0.3027404248714447] ``` ## Expected behavior I want the model output between the CPU model and GPU model to be the same. ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-3.10.0-693.el7.x86_64-x86_64-with-centos-7.4.1708-Core - Python version: 3.7.7 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
06-17-2020 21:03:53
06-17-2020 21:03:53
Hi! I don't think it's possible to have two different hardware perform *exactly* the same. A 1e-7 precision is already very high. Out of curiosity, why do you need such a high precision? Nevertheless, this is more of a pytorch-specific question rather than transformers-related. I'll close the issue here but feel free to link to an issue on the PyTorch forums if you do open one there!<|||||>@LysandreJik I haven't seen this issue with models not trained with transformers (though I probably just haven't looked hard enough). Can you give me more info on why this is the case, and maybe point me to some relevant resources? As to your second question-- we need a high precision because we're searching for adversarial examples that maximize model misprediction in our library [TextAttack](https://github.com/QData/TextAttack). This precision error caused a different search outcome with a CPU and GPU. So, for some pair of sequences $a$ and $b$, the model on the CPU predicted $a$ 'more correctly' than it predicted $b$, and the model on the GPU predicted $b$ more correctly than $a$. Our automated tests caught this issue. Do you have a suggestion on how to fix it? Should we just truncate model scores to 7 decimal places? That feels like a crude fix.
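One way to make downstream comparisons robust to this kind of cross-device float noise (a sketch; the tolerance is an arbitrary choice, not a recommendation from the thread):

```python
import torch

cpu_scores = torch.tensor([-1.0867608785629272, 0.6688917279243469, 0.3027404248714447])
gpu_scores = torch.tensor([-1.0867613554000854, 0.6688923239707947, 0.30274006724357605])

# Compare with an explicit tolerance instead of exact equality.
print(torch.allclose(cpu_scores, gpu_scores, atol=1e-5))  # True

# If rankings must agree across devices, round to a fixed precision before comparing.
print(torch.round(cpu_scores * 1e4) / 1e4)
```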
transformers
5,093
closed
[style] add pandas to setup.cfg
Need to add pandas to setup.cfg. Otherwise, for people who have pandas installed locally, isort tries to change `eli5_utils.py` every time.
06-17-2020 20:33:02
06-17-2020 20:33:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=h1) Report > Merging [#5093](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90c833870c78bb3d5d807a9a3e6a40d24bf2302b&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5093/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5093 +/- ## ======================================= Coverage 77.28% 77.28% ======================================= Files 133 133 Lines 22134 22134 ======================================= Hits 17107 17107 Misses 5027 5027 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=footer). Last update [90c8338...eae4841](https://codecov.io/gh/huggingface/transformers/pull/5093?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,092
closed
[MarianTokenizer] Switch to sacremoses for punc normalization
Attempt at fixing #4491 using @jpcorb20 's solution
06-17-2020 20:22:50
06-17-2020 20:22:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=h1) Report > Merging [#5092](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20fa82898495f516b221115fc3ef9ec8ebf50b1e&el=desc) will **increase** coverage by `0.07%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5092/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5092 +/- ## ========================================== + Coverage 77.21% 77.29% +0.07% ========================================== Files 133 133 Lines 22134 22134 ========================================== + Hits 17091 17108 +17 + Misses 5043 5026 -17 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5092/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.75% <100.00%> (+0.89%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5092/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5092/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5092/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=footer). Last update [20fa828...e30eaf5](https://codecov.io/gh/huggingface/transformers/pull/5092?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
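For context, sacremoses exposes a Moses punctuation normalizer; a minimal usage sketch (the example string is made up):

```python
from sacremoses import MosesPunctNormalizer

mpn = MosesPunctNormalizer(lang="en")
# Normalizes curly quotes, non-breaking spaces, unusual dashes, etc. to plain punctuation.
print(mpn.normalize("«Hello “world”  –  how are you?»"))
```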
transformers
5,091
closed
encode_plus wrongly tokenizing a symbol
When I used the bio dataset (linnaeus-IOB) to train BERT, the dataset includes the symbol `>=` [greater than or equal], and the tokenizer separates it into two tokens. **Expected output is either**: `>` `##=` **or** `>=`
06-17-2020 20:14:07
06-17-2020 20:14:07
What tokenizer are you using? Is it one already trained? If so, which? If not, on what did you train your tokenizer? What is your code? What is your output? If possible, please fill the template. It's much easier to help you if you provide all the information we need.<|||||>1. Trained using a pretrained biobert for NER task: `tokenizer = BertTokenizer.from_pretrained("monologg/biobert_v1.0_pubmed_pmc")` 2. encode_plus method `encoded_dict = tokenizer.encode_plus( sent_str, # Sentence to encode. add_special_tokens = True, max_length = 75, pad_to_max_length = True, return_attention_mask = True, return_tensors = 'pt', )` 3. Dataset of > linnaeus-IOB: includes in one line `>= label O` 3. Current output the line of `>=` is split into two symbols without hashes I get `> label O` `= label O`<|||||>Same for the bio dataset BC4CHEMD-IOBES, it has (R), and the tokenizer split them into three tokens without hashes<|||||>Right, I fail to understand why you think this is wrongly tokenized? This tokenizer does not have the token `>=` in its vocabulary. You can check with: ```py ">" in tokenizer.get_vocab() # Returns True ">=" in tokenizer.get_vocab() # returns False ```<|||||>@LysandreJik yes sir, thank you But I'm trying to understand why `encode_plus` did not add hashes `##` before `=` while tokenization I was expecting to see `>` `##=` so they would relate to the same token but that was not the case Is my question clear? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
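For what it's worth, the missing `##` comes from BERT's pre-tokenization: punctuation characters are split into separate words before WordPiece runs, so `=` is never a continuation piece of `>`. A quick check (exact token output depends on the vocab):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("monologg/biobert_v1.0_pubmed_pmc")
print(tokenizer.tokenize(">="))         # punctuation is split first, e.g. ['>', '=']
print(">=" in tokenizer.get_vocab())    # no single token for '>=' in this vocab
print("##=" in tokenizer.get_vocab())   # '##=' would only apply inside a single pre-token
```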
transformers
5,090
closed
minor spelling correction in script execution command - movement pruning
The actual script name is `counts_parameters.py`.
06-17-2020 20:06:43
06-17-2020 20:06:43
transformers
5,089
closed
Is there a helper script to preprocess data for T5 for masked language modeling?
Hi Team, thanks for the wonderful HuggingFace library! I am now working with T5 on my own dataset. I want to know if there is a helper script that can automatically take text, mask a random set of tokens, and generate the expected output sequence for the unsupervised pretraining (masked language modeling) task.
06-17-2020 19:56:48
06-17-2020 19:56:48
Not yet sadly - it's on my ToDo list. Hope to be able to work on it soon<|||||>I am working on a script for T5 based upon the current run_language_modeling.py, maybe I can share that once I am done and someone can confirm if it works as expected?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi, I'm working in the same task. [Here](https://github.com/huggingface/transformers/issues/7451) you can see my code if it helps!
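In the meantime, a toy helper along these lines can produce T5-style span-corruption pairs (a rough sketch over whitespace-split words with hand-picked spans; the official preprocessing samples spans over SentencePiece tokens, roughly 15% of them with a mean span length of 3):

```python
from transformers import T5Tokenizer

def corrupt_spans(words, span_starts, span_len=2):
    """Replace the given word spans with <extra_id_N> sentinels and build the target."""
    inputs, targets, sentinel, i = [], [], 0, 0
    starts = set(span_starts)
    while i < len(words):
        if i in starts:
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}>")
            targets.extend(words[i:i + span_len])
            sentinel += 1
            i += span_len
        else:
            inputs.append(words[i])
            i += 1
    targets.append(f"<extra_id_{sentinel}>")  # closing sentinel
    return " ".join(inputs), " ".join(targets)

tokenizer = T5Tokenizer.from_pretrained("t5-small")
src, tgt = corrupt_spans("The cute dog walks in the green park".split(), span_starts=[1, 5])
# src: "The <extra_id_0> walks in <extra_id_1> park"
# tgt: "<extra_id_0> cute dog <extra_id_1> the green <extra_id_2>"
input_ids = tokenizer.encode(src, return_tensors="pt")
labels = tokenizer.encode(tgt, return_tensors="pt")
# These can then be passed to T5ForConditionalGeneration(input_ids=input_ids, labels=labels)
```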
transformers
5,088
closed
Image GPT
# 🌟 New model addition ## Model description OpenAI just announced Image GPT: https://openai.com/blog/image-gpt/ Although image rendering would be out of scope for Transformers, the RGB generation would still be in scope and it would be best to port the weights to a `GPT2LMModel`. However, it's not immediately clear here how the tokenization is implemented in the downloaded model. (no separate `vocab.json`) ## Open source status * [ ] the model implementation is available: https://github.com/openai/image-gpt * [ ] the model weights are available: see README above * [ ] who are the authors: @openai
06-17-2020 19:10:43
06-17-2020 19:10:43
I'd like a google colab of it <|||||>Hey @minimaxir! Here's a [colab](https://colab.research.google.com/github/apeguero1/image-gpt/blob/master/Transformers_Image_GPT.ipynb) which loads the weights into a subclass of `GPT2LMHeadModel` and demonstrates unconditional image generation and conditional image completion. Some differences I've found between Image-GPT and GPT2 which are reflected in the subclass. 1) Image-GPT layer normalization doesn't subtract off the mean 2) different activations used in the MLP 3) In Image-GPT, the input and output embeddings are not tied 4) Image-GPT has an extra learned "sos" token embedding which is concatenated at the beginning of the sequence 5) The GPT2 `[n_embd, 3*n_embd]` dimensional linear layer, `c_attn`, which produces queries, keys, and values is instead split into 3 separate linear layers each with dimension `[n_head, n_embd/n_head, n_embd]` in Image-GPT (this only affects how to load the weights and not the actual model). 6) In Image-GPT, the `conv1d` module doesn't have a bias term So what's our next step to add this to the repo?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@apeguero1 we have an "Adding a new model" checklist at https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
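To make difference 1 in the list above concrete, a layer norm without mean subtraction can be sketched as follows (an illustration of the idea, not the ported Image-GPT code):

```python
import torch

class NoMeanLayerNorm(torch.nn.Module):
    """LayerNorm variant that skips centering: scale by the RMS of the activations only."""

    def __init__(self, hidden_size, eps=1e-5):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x):
        # No mean subtraction; normalize by the root-mean-square of the last dimension.
        variance = x.pow(2).mean(-1, keepdim=True)
        return self.weight * x * torch.rsqrt(variance + self.eps)
```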
transformers
5,087
closed
Why does the T5Tokenizer prepend and '_' to every token?
I am using the T5Tokenizer as follows: ``` In [119]: tokenizer = T5Tokenizer.from_pretrained('t5-small') In [120]: input_ids = tokenizer.encode_plus('I love my dog', return_tensors='pt') In [121]: [tokenizer.convert_ids_to_tokens([ele]) for ele in input_ids['input_ids'][0]] Out[121]: [['▁I'], ['▁love'], ['▁my'], ['▁dog']] ``` I even tried another one as follows: ``` In [129]: [tokenizer.convert_ids_to_tokens([ele]) for ele in input_ids['input_ids'][0]] Out[129]: [['▁I'], ['▁love'], ['▁my'], ['▁school'], ['▁National'], ['x'], ['y'], ['z']] ``` Above, shouldn't 'x', 'y', and 'z' have # prepended to them as they are part of the same word? Why do I see an underscore before every token? Is this related to how it is sent inside T5?
06-17-2020 18:25:36
06-17-2020 18:25:36
Hi, the T5 tokenizer is a SentencePiece tokenizer, and that's the way SentencePiece works. This underscore means that it's the start of a word. When it's not the start of a word, it's not prepended by anything. I think you're thinking of the # symbol because you're used to the BertTokenizer (wordpiece). SentencePiece works a bit differently!
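A quick way to see this behavior (the exact pieces depend on the SentencePiece vocab, so the comments are illustrative):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
# "▁" marks the piece that starts a new word; continuation pieces carry no prefix.
print(tokenizer.tokenize("I love my dog"))   # each piece starts a word, so all carry ▁
print(tokenizer.tokenize("Nationalxyz"))     # trailing pieces of the same word have no ▁
```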
transformers
5,086
closed
SummarizationPipeline: init required task name
Otherwise, can't do: ```python tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn") model = AutoModelWithLMHead.from_pretrained("facebook/bart-large-cnn") p = SummarizationPipeline(model=model, tokenizer=tokenizer) p("Long boring text to summarize, etc. etc.") ```
06-17-2020 18:01:45
06-17-2020 18:01:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=h1) Report > Merging [#5086](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/70bc3ead4f0b08e8cadd1805ada2a22f0c302399&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5086/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5086 +/- ## ======================================= Coverage 77.26% 77.27% ======================================= Files 133 133 Lines 22146 22149 +3 ======================================= + Hits 17111 17115 +4 + Misses 5035 5034 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.41% <100.00%> (+0.13%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=footer). Last update [70bc3ea...ca3ad69](https://codecov.io/gh/huggingface/transformers/pull/5086?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I think all Pipelines have this problem, not just the `summarization` pipeline, no? The only time the `task` string is ever really used is in the `class Pipeline(_ScikitCompat):` to call the correct task specific config parameters. I would actually propose to change the `task` string logic here a bit so that we give every pipeline class a static string variable `task` and don't pass it in the init. For summarization, e.g.: ```python class SummarizationPipeline(Pipeline): task = "summarization" def __call__(...): ... ``` Then in the `Pipeline` class we could have the following logic: ```python class Pipeline: task = None def __init__(...): ... assert self.task in SUPPORTED_TASKS.values(), f"{self.task} does not exist" ``` IMO, "task" is not really part of the instantiated object, but more of the class itself. Also, I think a model should always only have one default configuration per task. I think T5 is already an exception in that it can handle multiple task and I don't really see why T5 for example would need two different default configs for summarization. We can always overwrite the config parameters when calling the model, so I don't think we restrict ourselves too much with this design and having multiple default configs in the config file would quickly make them unreadable. What do you think @julien-c ?<|||||>I'm not following everything here @patrickvonplaten :) Merging this like that for now but feel free to refine in the future
transformers
5,085
closed
Add missing arg in 02-transformers notebook
Add the missing `from_tf=True` arg when creating the model with `AutoModel.from_pretrained()` to avoid an OSError from loading a PyTorch model from a TF 2.0 checkpoint. Also fixed two small typos in markdown text.
06-17-2020 15:38:29
06-17-2020 15:38:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=h1) Report > Merging [#5085](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5085/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5085 +/- ## ========================================== - Coverage 77.24% 77.22% -0.02% ========================================== Files 133 133 Lines 22146 22146 ========================================== - Hits 17107 17103 -4 - Misses 5039 5043 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.78%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.81% <0.00%> (+0.19%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=footer). Last update [7291ea0...0092c55](https://codecov.io/gh/huggingface/transformers/pull/5085?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi, I agree with the typo change, but there's no need for the `from_tf` flag. `bert-base-cased` is available in both TensorFlow and PyTorch, so there's no need for the flag.<|||||>@LysandreJik you're right! I was getting an error unless I explicitly passed `from_tf=True` yesterday, but not today. Since I can't reproduce it, I'll remove that change.
transformers
5,084
closed
Update installation page and add contributing to the doc
This PR simplifies the installation page, adds a mention of installing TF/PT with a single command (CPU only), and adds a check that the installation was successful. The tests mention is moved to CONTRIBUTING. The mention of the tokenization process for OpenAI GPT is moved to that model's doc page. It also adds the contributing guide to the documentation via a symlink (and fixes a few links to make it work).
06-17-2020 14:55:47
06-17-2020 14:55:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=h1) Report > Merging [#5084](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5084/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5084 +/- ## ========================================== + Coverage 77.24% 77.26% +0.02% ========================================== Files 133 133 Lines 22146 22146 ========================================== + Hits 17107 17112 +5 + Misses 5039 5034 -5 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5084/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5084/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5084/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.77%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=footer). Last update [7291ea0...e9e1ea6](https://codecov.io/gh/huggingface/transformers/pull/5084?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
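The installation check mentioned above is along the lines of running a pipeline once and verifying that it returns a prediction, for example (the exact snippet in the docs may differ):

```python
from transformers import pipeline

# Downloads a default sentiment-analysis model and runs one prediction.
classifier = pipeline("sentiment-analysis")
print(classifier("We are very happy to include pipeline into the transformers repository."))
```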
transformers
5,083
closed
updated hans eval instructions
I've updated the info regarding how HANS evaluation can be carried out. I've also renamed `run_hans.py` to `test_hans.py` to restore the previous file convention and to indicate that HANS only supports evaluation.
06-17-2020 14:08:39
06-17-2020 14:08:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=h1) Report > Merging [#5083](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **decrease** coverage by `0.39%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5083/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5083 +/- ## ========================================== - Coverage 77.24% 76.84% -0.40% ========================================== Files 133 133 Lines 22146 22146 ========================================== - Hits 17107 17019 -88 - Misses 5039 5127 +88 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `56.09% <0.00%> (-19.76%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-0.94%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5083/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (-0.12%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=footer). Last update [7291ea0...b485684](https://codecov.io/gh/huggingface/transformers/pull/5083?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,082
closed
Add header and fix command
This finishes to fix #4742
06-17-2020 14:05:37
06-17-2020 14:05:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=h1) Report > Merging [#5082](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5082/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5082 +/- ## ======================================= Coverage 77.24% 77.25% ======================================= Files 133 133 Lines 22146 22146 ======================================= + Hits 17107 17108 +1 + Misses 5039 5038 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (-0.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.95% <0.00%> (+0.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=footer). Last update [7291ea0...cc54022](https://codecov.io/gh/huggingface/transformers/pull/5082?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM
transformers
5,081
closed
01_how-to-train.ipynb broken
# 🐛 Bug ## To reproduce Steps to reproduce the behavior: 1. Go to https://github.com/huggingface/transformers/tree/master/examples 2. Click the colab for `language-modeling`: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb 3. Run notebook <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The notebook finishes succesfuly What I get is: ``` --------------------------------------------------------------------------- Exception Traceback (most recent call last) <ipython-input-5-52625a7c86e5> in <module>() 1 get_ipython().system('mkdir EsperBERTo') ----> 2 tokenizer.save("EsperBERTo") /usr/local/lib/python3.6/dist-packages/tokenizers/implementations/base_tokenizer.py in save(self, path, pretty) 330 A path to the destination Tokenizer file 331 """ --> 332 return self._tokenizer.save(path, pretty) 333 334 def to_str(self, pretty: bool = False): Exception: Is a directory (os error 21) ``` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: NA - Using distributed or parallel set-up in script?: NA
06-17-2020 12:54:37
06-17-2020 12:54:37
@orestisfl thanks for raising this, I also was scratching my head as the same tokenizer.save("EsperBERTo") worked for me a few days ago and not anymore. I could save the tokenizer using tokenizer.save("EsperBERTo/vocab.txt"), but then I can't load it. If I try to load I get: ``` TypeError: sep_token not found in the vocabulary ``` I'm using BertWordPieceTokenizer (not ByteLevelBPETokenizer used in your example) It worked with tokenizers version 0.7.0, I just checked - I got version 0.8.0rc1 currently installed. I'll downgrade to 0.7.0 for now. <|||||>Was this BC intended @n1t0?<|||||>Yes, `tokenizers` `0.8.0` introduces the full tokenizer serialization, whereas before it saved the "model" only (vocab.json + merges.txt for BPE). So the save method should be used like that: `.save("tokenizer.json")` and it saves the entire tokenizer to a JSON file. We need to update the Notebook to use this new serialization method, but in the meantime, the only thing needed to make it work exactly like before is to replace: ```python !mkdir EsperBERTo tokenizer.save("EsperBERTo") ``` by ```python !mkdir EsperBERTo tokenizer.save_model("EsperBERTo") ```<|||||>mind updating it before we forget? Thanks!<|||||>Sure, updated it with the quick change I mentioned. Will do a better update later.<|||||>Hey there, thanks for the quick fix! The notebook now crashes for me during training, however: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-19-0c647bc3a8b8> in <module>() ----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()') 11 frames <decorator-gen-60> in time(self, line, cell, local_ns) <timed eval> in <module>() /usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py in <listcomp>(.0) 112 probability_matrix = torch.full(labels.shape, self.mlm_probability) 113 special_tokens_mask = [ --> 114 self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist() 115 ] 116 probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0) AttributeError: 'RobertaTokenizerFast' object has no attribute 'get_special_tokens_mask' ``` Let me know if I should make a separate issue <|||||>This one is for me (this method was actually not working as intended under the hood for Fast-tokenizers...)<|||||>@thomwolf - just to confirm, I tried the change you made and it fixes a problem for me. Thanks! ``` AttributeError: 'BertTokenizerFast' object has no attribute 'get_special_tokens_mask' ```
transformers
5,080
closed
[docs] fix T5 training doc
This PR fixes the T5 training doc. In a recent commit, `lm_labels` was changed to `labels`; the doc has been updated accordingly. Regarding issue #5079 @patrickvonplaten
06-17-2020 11:28:41
06-17-2020 11:28:41
Great thanks @patil-suraj <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=h1) Report > Merging [#5080](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ebab096e864a619717a497089d864d10e21bc536&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5080/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5080 +/- ## ======================================= Coverage 77.26% 77.27% ======================================= Files 128 128 Lines 21854 21854 ======================================= + Hits 16886 16887 +1 + Misses 4968 4967 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5080/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5080/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5080/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=footer). Last update [ebab096...2f8a93c](https://codecov.io/gh/huggingface/transformers/pull/5080?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,079
closed
How do I pre-train the T5 model in HuggingFace library using my own text corpus?
Hello, I understand how the T5 architecture works and I have my own large corpus where I decide to mask a sequence of tokens and replace them with sentinel tokens. I also understand the tokenizers in HuggingFace, especially the T5 tokenizer. Can someone point me to a document, or refer me to the class, that I need to use to pretrain the T5 model on my corpus using the masked language modeling approach? Thanks
06-17-2020 10:41:14
06-17-2020 10:41:14
Hi, @abhisheknovoic this might help you https://huggingface.co/transformers/model_doc/t5.html#training check the Unsupervised denoising training section<|||||>@patil-suraj , do you mean this class? - T5ForConditionalGeneration Also, at the top of the page, there is the following code: ```input_ids = tokenizer.encode('The <extra_id_1> walks in <extra_id_2> park', return_tensors='pt') lm_labels = tokenizer.encode('<extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>', return_tensors='pt') # the forward function automatically creates the correct decoder_input_ids model(input_ids=input_ids, lm_labels=lm_labels) ``` Any idea which class is the model instantiated from? I could not find any class with lm_labels parameter. Thanks<|||||>Yes, it's `T5ForConditionalGeneration`, and `lm_lables` is now changed to `labels`. Pinging @patrickvonplaten for more details.<|||||>@patil-suraj , I tried the following code which throws an error. Any idea why? Thanks ```In [32]: from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config In [32]: from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config In [33]: input_ids = tokenizer.encode('The <extra_id_1> walks in <extra_id_2> park', return_tensors='pt') In [34]: labels = tokenizer.encode('<extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>', return_tensors='pt') In [35]: config = T5Config() In [36]: model = T5ForConditionalGeneration(config=config) In [37]: model(input_ids=input_ids, lm_labels=labels) --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-37-6717b0ecfbf5> in <module> ----> 1 model(input_ids=input_ids, lm_labels=labels) /usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /usr/local/lib/python3.7/site-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, decoder_past_key_value_states, use_cache, lm_labels, inputs_embeds, decoder_inputs_embeds, head_mask) 1068 if lm_labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None: 1069 # get decoder inputs from shifting lm labels to the right -> 1070 decoder_input_ids = self._shift_right(lm_labels) 1071 1072 # If decoding with past key value states, only the last tokens /usr/local/lib/python3.7/site-packages/transformers/modeling_t5.py in _shift_right(self, input_ids) 609 assert ( 610 decoder_start_token_id is not None --> 611 ), "self.model.config.decoder_start_token_id has to be defined. In T5 it is usually set to the pad_token_id. See T5 docs for more information" 612 613 # shift inputs to the right AssertionError: self.model.config.decoder_start_token_id has to be defined. In T5 it is usually set to the pad_token_id. See T5 docs for more information ``` My versions are ``` transformers==2.11.0 tokenizers==0.7.0 ```<|||||>If you are using 2.11.0 then use `lm_labels` and if you are using master then use `labels`<|||||>@patil-suraj , thanks. I have installed the master version. It still complains with the same error. It seems like I need to specify something for the decoder_start_token_id. <|||||>Ok, I got it working. 
I initialized the config as follows:
```
config = T5Config(decoder_start_token_id=tokenizer.convert_tokens_to_ids(['<pad>'])[0])
```<|||||>@patil-suraj , however, if we use the master branch, it seems like the tokenizers are broken. The T5 tokenizer doesn't tokenize the sentinel tokens correctly.<|||||>> @patil-suraj , do you mean this class? - T5ForConditionalGeneration
>
> Also, at the top of the page, there is the following code:
>
> ```
> lm_labels = tokenizer.encode('<extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>', return_tensors='pt')
> # the forward function automatically creates the correct decoder_input_ids
> model(input_ids=input_ids, lm_labels=lm_labels)
> ```
>
> Any idea which class is the model instantiated from? I could not find any class with lm_labels parameter.
>
> Thanks

Feel free to also open a PR to correct `lm_labels` to `labels` in the comment :-) <|||||>Just saw that @patil-suraj already did this - awesome, thanks :-) @abhisheknovoic regarding the T5 tokenizer, can you post some code here that shows that T5 tokenization is broken? (It would be great if we can easily reproduce the error.)<|||||>@patrickvonplaten it would be nice if we also added seq-2-seq (T5, BART) model pre-training examples to the official examples cc @sshleifer <|||||>Definitely!<|||||>Not sure if this should be a separate issue or not, but I am having difficulty training my own T5 tokenizer. When training a BPE tokenizer using the amazing huggingface tokenizers library and attempting to load it via
```python
tokenizer = T5Tokenizer.from_pretrained('./tokenizer')
```
I get the following error:
```
OSError: Model name './tokenizer/' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed './tokenizer/' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
```
When I instead train a sentencepiece model with the (again amazing) huggingface tokenizers library, I get the same error, because the `tokenizer.save` method does not actually generate the `spiece.model` file. Am I doing something wrong?

Transformers version: 2.11.0
Tokenizers version: 0.7.0

Here is a colab to reproduce the error: https://colab.research.google.com/drive/1WX1Q2Ze9k0SxFMLLv1aFgVGBFMEVTyDe?usp=sharing<|||||>@mfuntowicz @n1t0 - maybe you can help here<|||||>> Definitely!

The pre-training scripts would really help. The original mesh transformer is very complicated to understand.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>We've released [nanoT5](https://github.com/PiotrNawrot/nanoT5), which reproduces T5-model (similar to BART) pre-training in PyTorch (not Flax). You can take a look! Any suggestions are more than welcome.
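For anyone landing on this thread, here is a minimal sketch of the denoising training step being discussed, assuming the 2.11-era API where the label argument is `lm_labels` (on master it is `labels`) and that the loss is the first element of the returned tuple; the corrupted/target strings are illustrative, not from the thread:

```python
# Hedged sketch of one T5 denoising training step (transformers 2.11-era API).
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Corrupted input: masked spans replaced by sentinel tokens.
corrupted_text = "The <extra_id_0> walks in <extra_id_1> park"
# Target: each sentinel followed by the span it replaced.
target_text = "<extra_id_0> cute dog <extra_id_1> the <extra_id_2>"

input_ids = tokenizer.encode(corrupted_text, return_tensors="pt")
labels = tokenizer.encode(target_text, return_tensors="pt")

model.train()
outputs = model(input_ids=input_ids, lm_labels=labels)  # use labels=... on master
loss = outputs[0]  # with labels provided, the LM loss comes first in the tuple
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Since the pretrained checkpoint already defines `decoder_start_token_id`, the assertion discussed earlier does not trigger here; it only matters when building a `T5Config` from scratch.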
transformers
5,078
closed
Add BERT Loses Patience (Patience-based Early Exit)
Add BERT Loses Patience (Patience-based Early Exit), based on the paper https://arxiv.org/abs/2006.04152 and the official implementation https://github.com/JetRunner/PABEE. It's impossible to make PABEE's ALBERT and BERT compatible with the standard API (e.g., `run_glue.py`), so I keep the modeling files in a separate directory, under `example/bert_loses_patience`, instead of putting them alongside `modeling_bert.py`.
06-17-2020 10:07:35
06-17-2020 10:07:35
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=h1) Report > Merging [#5078](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e4aaa4580515446cd5a2972ab42fec0b95819c84&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5078/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5078 +/- ## ======================================= Coverage 77.26% 77.26% ======================================= Files 133 133 Lines 22146 22146 ======================================= + Hits 17110 17111 +1 + Misses 5036 5035 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=footer). Last update [e4aaa45...8e0cf02](https://codecov.io/gh/huggingface/transformers/pull/5078?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @LysandreJik , my worry is that PABEE is not a standard inference method. It cannot deal with batch inference. Also, it can only support classification & regression (no tagging, no summarization, etc.). From another aspect, as a researcher, when I try to do a little tweak with ALBERT, I won't be happy if someone adds some new stuff into the model and it'll add unnecessary burdens to the researchers. They'll definitely hate it. So IMO, it's better to implement them separately. Also, as @sshleifer suggested, I refactored the model to inherit `AlbertTransformer` and `AlbertModel`, etc.<|||||>> Hi @LysandreJik , my worry is that PABEE is not a standard inference method. It cannot deal with batch inference. Also, it can only support classification & regression (no tagging, no summarization, etc.). > > From another aspect, as a researcher, when I try to do a little tweak with ALBERT, I won't be happy if someone adds some new stuff into the model and it'll add unnecessary burdens to the researchers. They'll definitely hate it. So IMO, it's better to implement them separately. Also, as @sshleifer suggested, I refactored the model to inherit `AlbertTransformer` and `AlbertModel`, etc. Since I refactored the code with inheritance, I figure it is okay to use `adaptive_forward` since I won't have to overwrite the standard `forward` (which would be confusing since it's hard to tell which part is modified for the users). Also, it's better to preserve the standard `forward` so we can easily compare `adaptive_forward` to the standard `forward`. @sshleifer I copy most stuff from the original ALBERT & BERT modeling code so I think it also does not make sense if I refactor the parts I copied. Re. trainer, maybe we can refactor the code later? 
I think it's quite optional here, but it requires a lot of changes to `run_glue_with_pabee.py`. Some part of me wonders if it is worth it.<|||||>The code you copied has gotten cleaned up since you copied it, hence the suggestions.<|||||>> LGTM pending suggestions, test.

Okay, I'll add a test
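For context on the mechanism being reviewed, here is a rough sketch of the patience-based early-exit idea from the paper; this is an illustration only, not the code in this PR, and `layers`/`classifiers` are assumed to be plain callables:

```python
import torch

def forward_with_patience(embedding_output, layers, classifiers, patience=3):
    """Illustrative patience-based early exit for a single (unbatched) example.

    `layers` are the transformer blocks and `classifiers` are per-layer heads;
    both interfaces are assumptions of this sketch, not the PR's actual code.
    """
    hidden = embedding_output
    patient_counter = 0
    prev_label = None
    logits = None
    for layer, classifier in zip(layers, classifiers):
        hidden = layer(hidden)                     # run one more transformer block
        logits = classifier(hidden)                # internal classifier for this layer
        label = int(torch.argmax(logits, dim=-1))
        if prev_label is not None and label == prev_label:
            patient_counter += 1                   # consecutive agreement
        else:
            patient_counter = 0                    # a disagreement resets the counter
        prev_label = label
        if patient_counter >= patience:
            return logits                          # exit before running deeper layers
    return logits                                  # no early exit: use the last layer
```

This also shows why batched inference is awkward: different examples in a batch may want to exit at different depths.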
transformers
5,077
closed
Several problems with named entities predicted with the NER pipeline
# 🐛 Bug ## Information Hello, I am using the `bert-base-cased` model to predict named entities for a bunch of sentences (around 29 900). I am facing 3 main issues : 1. Residual '##' in grouped entities' word field (So they are not well grouped) 2. [UNK] (or [CLS]) tokens inside word fields 3. Missing syllables in the word fields Model I am using (Bert, XLNet ...): Bert (`dbmdz/bert-large-cased-finetuned-conll03-english`) Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: NER with my own unlabelled dataset ## To reproduce I didn't find the official example for this so I made my own script with the `TokenClassificationPipeline` : ```Python import torch from transformers import AutoModelForTokenClassification, AutoTokenizer from transformers import TokenClassificationPipeline model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") nlp_not_grouped = TokenClassificationPipeline( model=model, tokenizer=tokenizer, grouped_entities=False ) nlp_grouped = TokenClassificationPipeline( model=model, tokenizer=tokenizer, grouped_entities=True ) seq1 = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \ "close to the Manhattan Bridge." seq2 = "In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification ." seq3 = "Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % "\ "on a reported basis and 10 . 4 % on a like - for - like basis ." seq4 = "To prepare as best as possible the decisions falling under its responsibilities , Faurecia ’ s Board of"\ " Directors has set up three committees : c Audit Committee ; c Strategy Committee ; c Appointments and Compensation"\ " Committee ." sequences = [seq1, seq2, seq3, seq4] for i, seq in enumerate(sequences): ngrouped, grouped = nlp_not_grouped(seq), nlp_grouped(seq) print(f"===================== sentence n°{i+1}") print("---Sentence---") print(seq) print("---Not grouped entities---") for ngent in ngrouped: print(ngent) print("---Grouped entities---") for gent in grouped: print(gent) ``` I have about 29 900 sentences. For each sentence I want to predict all the named entities in it and then locate them in the sentence. Once I have an entity, I use a regex to find it in the original sentence (before the tokenization step) like this : ```Python start, stop = re.search(re.escape(ent['word']), sent).span() ``` Where `ent['word']` is the text of an entity found in a sentence. For instance, it can be `"London"` for the sentence (sent) `"London is really a great city"`. However I do this later with the grouped entities but since there are errors in it many are discarded because `re.search()` raises an exception (that I catch). Steps to reproduce the behavior: You just have to run my script to predict the entities for the four sentences. Here is what I get : ```Python ===================== sentence n°1 ---Sentence--- Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore veryclose to the Manhattan Bridge. 
---Not grouped entities--- {'word': 'Hu', 'score': 0.9995108246803284, 'entity': 'I-ORG', 'index': 1} {'word': '##gging', 'score': 0.989597499370575, 'entity': 'I-ORG', 'index': 2} {'word': 'Face', 'score': 0.9979704022407532, 'entity': 'I-ORG', 'index': 3} {'word': 'Inc', 'score': 0.9993758797645569, 'entity': 'I-ORG', 'index': 4} {'word': 'New', 'score': 0.9993405938148499, 'entity': 'I-LOC', 'index': 11} {'word': 'York', 'score': 0.9991927742958069, 'entity': 'I-LOC', 'index': 12} {'word': 'City', 'score': 0.9993411302566528, 'entity': 'I-LOC', 'index': 13} {'word': 'D', 'score': 0.986336350440979, 'entity': 'I-LOC', 'index': 19} {'word': '##UM', 'score': 0.9396238923072815, 'entity': 'I-LOC', 'index': 20} {'word': '##BO', 'score': 0.9121386408805847, 'entity': 'I-LOC', 'index': 21} {'word': 'Manhattan', 'score': 0.9839190244674683, 'entity': 'I-LOC', 'index': 29} {'word': 'Bridge', 'score': 0.9924242496490479, 'entity': 'I-LOC', 'index': 30} ---Grouped entities--- {'entity_group': 'I-ORG', 'score': 0.9966136515140533, 'word': 'Hugging Face Inc'} {'entity_group': 'I-LOC', 'score': 0.9992914994557699, 'word': 'New York City'} {'entity_group': 'I-LOC', 'score': 0.9460329612096151, 'word': 'DUMBO'} {'entity_group': 'I-LOC', 'score': 0.9881716370582581, 'word': 'Manhattan Bridge'} ===================== sentence n°2 ---Sentence--- In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification . ---Not grouped entities--- {'word': 'B', 'score': 0.9997261762619019, 'entity': 'I-ORG', 'index': 5} {'word': '##la', 'score': 0.997683048248291, 'entity': 'I-ORG', 'index': 6} {'word': '##bla', 'score': 0.99888014793396, 'entity': 'I-ORG', 'index': 7} {'word': 'Group', 'score': 0.9992784261703491, 'entity': 'I-ORG', 'index': 8} {'word': 'ISO', 'score': 0.9711909890174866, 'entity': 'I-MISC', 'index': 14} {'word': 'T', 'score': 0.6591967344284058, 'entity': 'I-ORG', 'index': 16} {'word': '##S', 'score': 0.658642053604126, 'entity': 'I-MISC', 'index': 17} {'word': '##16', 'score': 0.5059574842453003, 'entity': 'I-MISC', 'index': 18} {'word': '##9', 'score': 0.5067382454872131, 'entity': 'I-MISC', 'index': 21} ---Grouped entities--- {'entity_group': 'I-ORG', 'score': 0.9988919496536255, 'word': 'Blabla Group'} {'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'} {'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'} {'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': '##S16'} ===================== sentence n°3 ---Sentence--- Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % on a reported basis and 10 . 4 % on a like - for - like basis . 
---Not grouped entities--- {'word': 'PS', 'score': 0.9970256686210632, 'entity': 'I-ORG', 'index': 5} {'word': '##A', 'score': 0.9927457571029663, 'entity': 'I-ORG', 'index': 6} {'word': 'P', 'score': 0.9980151653289795, 'entity': 'I-ORG', 'index': 7} {'word': '##eu', 'score': 0.9897757768630981, 'entity': 'I-ORG', 'index': 8} {'word': '##ge', 'score': 0.996147871017456, 'entity': 'I-ORG', 'index': 9} {'word': '##ot', 'score': 0.9928787350654602, 'entity': 'I-ORG', 'index': 10} {'word': '[UNK]', 'score': 0.5744695067405701, 'entity': 'I-ORG', 'index': 11} ---Grouped entities--- {'entity_group': 'I-ORG', 'score': 0.934436925819942, 'word': 'PSA Peugeot [UNK]'} ===================== sentence n°4 ---Sentence--- To prepare as best as possible the decisions falling under its responsibilities , Faurecia ’ s Board of Directors has set up three committees : c Audit Committee ; c Strategy Committee ; c Appointments and Compensation Committee . ---Not grouped entities--- {'word': 'F', 'score': 0.9983997941017151, 'entity': 'I-ORG', 'index': 14} {'word': '##au', 'score': 0.9473735690116882, 'entity': 'I-ORG', 'index': 15} {'word': '##re', 'score': 0.9604568481445312, 'entity': 'I-ORG', 'index': 16} {'word': '##cia', 'score': 0.992807149887085, 'entity': 'I-ORG', 'index': 17} {'word': 'Board', 'score': 0.8452167510986328, 'entity': 'I-ORG', 'index': 20} {'word': 'of', 'score': 0.5921975374221802, 'entity': 'I-ORG', 'index': 21} {'word': 'Directors', 'score': 0.6778028607368469, 'entity': 'I-ORG', 'index': 22} {'word': 'Audi', 'score': 0.9764850735664368, 'entity': 'I-ORG', 'index': 30} {'word': '##t', 'score': 0.9692177772521973, 'entity': 'I-ORG', 'index': 31} {'word': 'Committee', 'score': 0.9959701299667358, 'entity': 'I-ORG', 'index': 32} {'word': 'Strategy', 'score': 0.9705951809883118, 'entity': 'I-ORG', 'index': 35} {'word': 'Committee', 'score': 0.994032621383667, 'entity': 'I-ORG', 'index': 36} {'word': 'A', 'score': 0.9764854907989502, 'entity': 'I-ORG', 'index': 39} {'word': '##oint', 'score': 0.7803319692611694, 'entity': 'I-ORG', 'index': 41} {'word': '##ments', 'score': 0.7828453779220581, 'entity': 'I-ORG', 'index': 42} {'word': 'and', 'score': 0.9625542163848877, 'entity': 'I-ORG', 'index': 43} {'word': 'Co', 'score': 0.9904180765151978, 'entity': 'I-ORG', 'index': 44} {'word': '##mp', 'score': 0.9140805602073669, 'entity': 'I-ORG', 'index': 45} {'word': '##ens', 'score': 0.8661588430404663, 'entity': 'I-ORG', 'index': 46} {'word': '##ation', 'score': 0.9150537252426147, 'entity': 'I-ORG', 'index': 47} {'word': 'Committee', 'score': 0.9888517260551453, 'entity': 'I-ORG', 'index': 48} ---Grouped entities--- {'entity_group': 'I-ORG', 'score': 0.9747593402862549, 'word': 'Faurecia'} {'entity_group': 'I-ORG', 'score': 0.7050723830858866, 'word': 'Board of Directors'} {'entity_group': 'I-ORG', 'score': 0.9805576602617899, 'word': 'Audit Committee'} {'entity_group': 'I-ORG', 'score': 0.9823139011859894, 'word': 'Strategy Committee'} {'entity_group': 'I-ORG', 'score': 0.9764854907989502, 'word': 'A'} {'entity_group': 'I-ORG', 'score': 0.9000368118286133, 'word': '##ointments and Compensation Committee'} ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> For the first sentence (seq1) everything is fine. It's the example of the NER section under Usage section of the documentation : https://huggingface.co/transformers/usage.html#named-entity-recognition With the other sentences we can see one example of each problem : ### Residual '##' in word pieces ```Python {'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'} {'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'} {'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': '##S16'} ``` In seq 2, there is `'##S16'` as a word. Obviously, it should have been grouped with the precending entity and form `TS16` even maybe `'ISO / TS16949'` like this : ```Python {'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO / TS16949'} ``` ### [UNK] tokens in the `word` field ```Python {'entity_group': 'I-ORG', 'score': 0.934436925819942, 'word': 'PSA Peugeot [UNK]'} ``` Because maybe of the ugly written Citroën which stands for Citroën. The entity found is `'PSA Peugeot [UNK]'`. In this case it would be better to just put `'PSA Peugeot'` if the last token is identified as [UNK] : ```Python {'entity_group': 'I-ORG', 'score': 0.934436925819942, 'word': 'PSA Peugeot'} ``` ### Syllables lost For the last sentence we can see that 'Appointments and Compensation Committee' as be splitted into : ```Python {'entity_group': 'I-ORG', 'score': 0.9764854907989502, 'word': 'A'} {'entity_group': 'I-ORG', 'score': 0.9000368118286133, 'word': '##ointments and Compensation Committee'} ``` instead of : ```Python {'entity_group': 'I-ORG', 'score': 0.9000368118286133, 'word': 'Appointments and Compensation Committee'} ``` The entity is not well grouped but more importantly the 'pp' is missing so even if we decided to blend the two groups we wouldn't get the real entity. This problem was first raised here : #4816. I've actually encountered this problem trying to fix the first one : I noticed some entity grouped like this, miss some syllables. 
The pipeline with `grouped_entity=False` already lost the 'pp' : ```Python {'word': 'A', 'score': 0.9764854907989502, 'entity': 'I-ORG', 'index': 39} {'word': '##oint', 'score': 0.7803319692611694, 'entity': 'I-ORG', 'index': 41} {'word': '##ments', 'score': 0.7828453779220581, 'entity': 'I-ORG', 'index': 42} ``` It seems the way the pipeline blends each tokens is not ok because when I predict the label for each tokens with the code example of the documentation, I get this : `[('[CLS]', 'O'), ('To', 'O'), ('prepare', 'O'), ('as', 'O'), ('best', 'O'), ('as', 'I-ORG'), ('possible', 'I-ORG'), ('the', 'I-ORG'), ('decisions', 'I-ORG'), ('falling', 'I-ORG'), ('under', 'I-ORG'), ('its', 'I-ORG'), ('responsibilities', 'O'), (',', 'O'), ('F', 'O'), ('##au', 'O'), ('##re', 'O'), ('##cia', 'O'), ('[UNK]', 'O'), ('s', 'O'), ('Board', 'O'), ('of', 'O'), ('Directors', 'O'), ('has', 'O'), ('set', 'O'), ('up', 'O'), ('three', 'O'), ('committees', 'O'), (':', 'O'), ('c', 'O'), ('Audi', 'O'), ('##t', 'O'), ('Committee', 'O'), (';', 'O'), ('c', 'O'), ('Strategy', 'O'), ('Committee', 'O'), (';', 'O'), ('c', 'O'), ('A', 'O'), ('##pp', 'O'), ('##oint', 'O'), ('##ments', 'O'), ('and', 'O'), ('Co', 'O'), ('##mp', 'O'), ('##ens', 'O'), ('##ation', 'O'), ('Committee', 'O'), ('.', 'O'), ('[SEP]', 'O')]` There are those tokens : `('A', 'O'), ('##pp', 'O'), ('##oint', 'O'), ('##ments', 'O')` for 'Appointments' ## Environment info - `transformers` version: 2.11.0 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.7.6 - PyTorch version (GPU?): 1.5.0+cpu (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: False - Using distributed or parallel set-up in script?: False EDIT : Typos
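As a hedged sketch of a more tolerant lookup than the plain `re.search(re.escape(ent['word']), sent)` above: `locate_entity` is an illustrative helper name, and the '##' handling is a best-effort assumption rather than a fix for the pipeline itself.

```python
import re

def locate_entity(entity_word, sentence):
    """Best-effort lookup of a (possibly imperfectly detokenized) entity span.

    Illustrative only: drops '##' joiners and makes whitespace flexible, so
    strings like 'DUM ##BO' or '##S16' still have a chance of matching.
    """
    cleaned = entity_word.replace(" ##", "").replace("##", "").strip()
    if not cleaned or "[UNK]" in cleaned:
        return None  # nothing sensible to search for
    # Allow arbitrary (possibly empty) whitespace between the remaining tokens.
    pattern = r"\s*".join(re.escape(tok) for tok in cleaned.split())
    match = re.search(pattern, sentence)
    return match.span() if match else None
```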
06-17-2020 09:39:01
06-17-2020 09:39:01
@Nighthyst thanks for summarizing these issues as I also ran into them. I was digging on this last weekend and I think maybe this could help: https://github.com/huggingface/tokenizers/pull/200 >> Provide some more mappings on the Encoding in order to easily identify words after tokenization. >> It also exposes a method encode_tokenized on the BaseTokenizer to allow skipping the usual Normalizer and PreTokenizer. This is especially useful for NER like datasets, where the pre-tokenization has already been done, and we want to attribute labels to pre-tokenized words. <|||||>Thanks for bringing this up. I can work on this on a separate PR after merging the PR that resolves the prior issue #4816.<|||||>some interesting finding: Using a fast tokenizer solves the `[UNK]` issue. using one of your provided examples: ```python model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True) nlp = TokenClassificationPipeline(model=model, tokenizer=tokenizer, grouped_entities=False) t="Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % on a reported basis and 10 . 4 % on a like - for - like basis ." nlp(t) ``` ``` [{'word': 'PS', 'score': 0.9961145520210266, 'entity': 'I-ORG', 'index': 5}, {'word': '##A', 'score': 0.9905584454536438, 'entity': 'I-ORG', 'index': 6}, {'word': 'P', 'score': 0.997616708278656, 'entity': 'I-ORG', 'index': 7}, {'word': '##eu', 'score': 0.9741767644882202, 'entity': 'I-ORG', 'index': 8}, {'word': '##ge', 'score': 0.9928027391433716, 'entity': 'I-ORG', 'index': 9}, {'word': '##ot', 'score': 0.9900722503662109, 'entity': 'I-ORG', 'index': 10}, {'word': 'C', 'score': 0.9574489593505859, 'entity': 'I-ORG', 'index': 11}, {'word': '##it', 'score': 0.824583113193512, 'entity': 'I-ORG', 'index': 12}, {'word': '##ro', 'score': 0.7597800493240356, 'entity': 'I-ORG', 'index': 13}, {'word': '##A', 'score': 0.953075647354126, 'entity': 'I-ORG', 'index': 14}, {'word': '«', 'score': 0.6135829091072083, 'entity': 'I-ORG', 'index': 15}] ```<|||||>@Nighthyst @dav009 Can you guys check if the above issues still persist after the recent PR merged (#4987)?<|||||>Hello @enzoampil, I updated transformers with master, with the command: `pip install --upgrade git+https://github.com/huggingface/transformers.git` Then I tried your tests and mine: ```Python from transformers import pipeline NER_MODEL = "mrm8488/bert-spanish-cased-finetuned-ner" nlp_ner = pipeline("ner", model=NER_MODEL, grouped_entities=True, tokenizer=(NER_MODEL, {"use_fast": False})) t = """Consuelo Araújo Noguera, ministra de cultura del presidente Andrés Pastrana (1998.2002) fue asesinada por las Farc luego de haber permanecido secuestrada por algunos meses.""" nlp_ner(t) ``` I have the expected output : ``` [{'entity_group': 'B-PER', 'score': 0.9710702555520194, 'word': 'Consuelo Araújo Noguera'}, {'entity_group': 'B-PER', 'score': 0.9997273534536362, 'word': 'Andrés Pastrana'}, {'entity_group': 'B-ORG', 'score': 0.8589079678058624, 'word': 'Farc'}] ``` And for your other test : ```Python nlp = pipeline('ner', grouped_entities=False) nlp("Enzo works at the the UN") ``` Output : ``` [{'word': 'En', 'score': 0.9968166351318359, 'entity': 'I-PER', 'index': 1}, {'word': '##zo', 'score': 0.9957635998725891, 'entity': 'I-PER', 'index': 2}, {'word': 'UN', 'score': 0.9986497163772583, 'entity': 'I-ORG', 'index': 7}] ``` And, ```Python nlp2 = pipeline('ner', 
grouped_entities=True) nlp2("Enzo works at the the UN") ``` Output : ``` {'entity_group': 'I-PER', 'score': 0.9962901175022125, 'word': 'Enzo'}, {'entity_group': 'I-ORG', 'score': 0.9986497163772583, 'word': 'UN'}] ``` However with my test : ```Python import torch from transformers import AutoModelForTokenClassification, AutoTokenizer from transformers import TokenClassificationPipeline model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") nlp_not_grouped = TokenClassificationPipeline( model=model, tokenizer=tokenizer, grouped_entities=False ) nlp_grouped = TokenClassificationPipeline( model=model, tokenizer=tokenizer, grouped_entities=True ) seq1 = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \ "close to the Manhattan Bridge." seq2 = "In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification ." seq3 = "Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % "\ "on a reported basis and 10 . 4 % on a like - for - like basis ." seq4 = "To prepare as best as possible the decisions falling under its responsibilities , Faurecia ’ s Board of"\ " Directors has set up three committees : c Audit Committee ; c Strategy Committee ; c Appointments and Compensation"\ " Committee ." sequences = [seq1, seq2, seq3, seq4] for i, seq in enumerate(sequences): ngrouped, grouped = nlp_not_grouped(seq), nlp_grouped(seq) print(f"===================== sentence n°{i+1}") print("---Sentence---") print(seq) print("---Not grouped entities---") for ngent in ngrouped: print(ngent) print("---Grouped entities---") for gent in grouped: print(gent) ``` I have this : ``` ===================== sentence n°1 ---Sentence--- Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore veryclose to the Manhattan Bridge. ---Not grouped entities--- {'word': 'Hu', 'score': 0.9995108246803284, 'entity': 'I-ORG', 'index': 1} {'word': '##gging', 'score': 0.989597499370575, 'entity': 'I-ORG', 'index': 2} {'word': 'Face', 'score': 0.9979704022407532, 'entity': 'I-ORG', 'index': 3} {'word': 'Inc', 'score': 0.9993758797645569, 'entity': 'I-ORG', 'index': 4} {'word': 'New', 'score': 0.9993405938148499, 'entity': 'I-LOC', 'index': 11} {'word': 'York', 'score': 0.9991927742958069, 'entity': 'I-LOC', 'index': 12} {'word': 'City', 'score': 0.9993411302566528, 'entity': 'I-LOC', 'index': 13} {'word': 'D', 'score': 0.986336350440979, 'entity': 'I-LOC', 'index': 19} {'word': '##UM', 'score': 0.9396238923072815, 'entity': 'I-LOC', 'index': 20} {'word': '##BO', 'score': 0.9121386408805847, 'entity': 'I-LOC', 'index': 21} {'word': 'Manhattan', 'score': 0.9839190244674683, 'entity': 'I-LOC', 'index': 29} {'word': 'Bridge', 'score': 0.9924242496490479, 'entity': 'I-LOC', 'index': 30} ---Grouped entities--- {'entity_group': 'I-ORG', 'score': 0.9966136515140533, 'word': 'Hugging Face Inc'} {'entity_group': 'I-LOC', 'score': 0.9992914994557699, 'word': 'New York City'} {'entity_group': 'I-LOC', 'score': 0.9460329612096151, 'word': 'DUMBO'} {'entity_group': 'I-LOC', 'score': 0.9881716370582581, 'word': 'Manhattan Bridge'} ===================== sentence n°2 ---Sentence--- In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification . 
---Not grouped entities--- {'word': 'B', 'score': 0.9997261762619019, 'entity': 'I-ORG', 'index': 5} {'word': '##la', 'score': 0.997683048248291, 'entity': 'I-ORG', 'index': 6} {'word': '##bla', 'score': 0.99888014793396, 'entity': 'I-ORG', 'index': 7} {'word': 'Group', 'score': 0.9992784261703491, 'entity': 'I-ORG', 'index': 8} {'word': 'ISO', 'score': 0.9711909890174866, 'entity': 'I-MISC', 'index': 14} {'word': 'T', 'score': 0.6591967344284058, 'entity': 'I-ORG', 'index': 16} {'word': '##S', 'score': 0.658642053604126, 'entity': 'I-MISC', 'index': 17} {'word': '##16', 'score': 0.5059574842453003, 'entity': 'I-MISC', 'index': 18} {'word': '##9', 'score': 0.5067382454872131, 'entity': 'I-MISC', 'index': 21} ---Grouped entities--- {'entity_group': 'I-ORG', 'score': 0.9988919496536255, 'word': 'Blabla Group'} {'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'} {'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'} {'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': '##S16'} {'entity_group': 'I-MISC', 'score': 0.5067382454872131, 'word': '##9'} ===================== sentence n°3 ---Sentence--- Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % on a reported basis and 10 . 4 % on a like - for - like basis . ---Not grouped entities--- {'word': 'PS', 'score': 0.9970256686210632, 'entity': 'I-ORG', 'index': 5} {'word': '##A', 'score': 0.9927457571029663, 'entity': 'I-ORG', 'index': 6} {'word': 'P', 'score': 0.9980151653289795, 'entity': 'I-ORG', 'index': 7} {'word': '##eu', 'score': 0.9897757768630981, 'entity': 'I-ORG', 'index': 8} {'word': '##ge', 'score': 0.996147871017456, 'entity': 'I-ORG', 'index': 9} {'word': '##ot', 'score': 0.9928787350654602, 'entity': 'I-ORG', 'index': 10} {'word': '[UNK]', 'score': 0.5744695067405701, 'entity': 'I-ORG', 'index': 11} ---Grouped entities--- {'entity_group': 'I-ORG', 'score': 0.934436925819942, 'word': 'PSA Peugeot [UNK]'} ===================== sentence n°4 ---Sentence--- To prepare as best as possible the decisions falling under its responsibilities , Faurecia ’ s Board of Directors has set up three committees : c Audit Committee ; c Strategy Committee ; c Appointments and Compensation Committee . 
---Not grouped entities--- {'word': 'F', 'score': 0.9983997941017151, 'entity': 'I-ORG', 'index': 14} {'word': '##au', 'score': 0.9473735690116882, 'entity': 'I-ORG', 'index': 15} {'word': '##re', 'score': 0.9604568481445312, 'entity': 'I-ORG', 'index': 16} {'word': '##cia', 'score': 0.992807149887085, 'entity': 'I-ORG', 'index': 17} {'word': 'Board', 'score': 0.8452167510986328, 'entity': 'I-ORG', 'index': 20} {'word': 'of', 'score': 0.5921975374221802, 'entity': 'I-ORG', 'index': 21} {'word': 'Directors', 'score': 0.6778028607368469, 'entity': 'I-ORG', 'index': 22} {'word': 'Audi', 'score': 0.9764850735664368, 'entity': 'I-ORG', 'index': 30} {'word': '##t', 'score': 0.9692177772521973, 'entity': 'I-ORG', 'index': 31} {'word': 'Committee', 'score': 0.9959701299667358, 'entity': 'I-ORG', 'index': 32} {'word': 'Strategy', 'score': 0.9705951809883118, 'entity': 'I-ORG', 'index': 35} {'word': 'Committee', 'score': 0.994032621383667, 'entity': 'I-ORG', 'index': 36} {'word': 'A', 'score': 0.9764854907989502, 'entity': 'I-ORG', 'index': 39} {'word': '##oint', 'score': 0.7803319692611694, 'entity': 'I-ORG', 'index': 41} {'word': '##ments', 'score': 0.7828453779220581, 'entity': 'I-ORG', 'index': 42} {'word': 'and', 'score': 0.9625542163848877, 'entity': 'I-ORG', 'index': 43} {'word': 'Co', 'score': 0.9904180765151978, 'entity': 'I-ORG', 'index': 44} {'word': '##mp', 'score': 0.9140805602073669, 'entity': 'I-ORG', 'index': 45} {'word': '##ens', 'score': 0.8661588430404663, 'entity': 'I-ORG', 'index': 46} {'word': '##ation', 'score': 0.9150537252426147, 'entity': 'I-ORG', 'index': 47} {'word': 'Committee', 'score': 0.9888517260551453, 'entity': 'I-ORG', 'index': 48} ---Grouped entities--- {'entity_group': 'I-ORG', 'score': 0.9747593402862549, 'word': 'Faurecia'} {'entity_group': 'I-ORG', 'score': 0.7050723830858866, 'word': 'Board of Directors'} {'entity_group': 'I-ORG', 'score': 0.9805576602617899, 'word': 'Audit Committee'} {'entity_group': 'I-ORG', 'score': 0.9823139011859894, 'word': 'Strategy Committee'} {'entity_group': 'I-ORG', 'score': 0.9764854907989502, 'word': 'A'} {'entity_group': 'I-ORG', 'score': 0.9000368118286133, 'word': '##ointments and Compensation Committee'} ``` It seems like the problem is still here for sentence n°4 : the last group should be "Appointments and Compensation Committee". For sentence n°2 it should be : "TS16949" as MISC or ORG at least it predicts the T in ORG and the other part in MISC. Even if both parts don't have the same entity tag, the ORG part should have been in one group "S16949" at least I think. Also @dav009 "trick" to solve the [UNK] issue seems to not be working anymore : ```Python model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True) nlp = TokenClassificationPipeline(model=model, tokenizer=tokenizer, grouped_entities=False) t="Product sales to the PSA Peugeot Citroën group totaled € 1 , 893 . 6 million in 2012 , down 8 . 1 % on a reported basis and 10 . 4 % on a like - for - like basis ." 
nlp(t) ``` Output : ``` [{'word': 'PS', 'score': 0.9970256686210632, 'entity': 'I-ORG', 'index': 5}, {'word': '##A', 'score': 0.9927457571029663, 'entity': 'I-ORG', 'index': 6}, {'word': 'P', 'score': 0.9980151653289795, 'entity': 'I-ORG', 'index': 7}, {'word': '##eu', 'score': 0.9897757768630981, 'entity': 'I-ORG', 'index': 8}, {'word': '##ge', 'score': 0.996147871017456, 'entity': 'I-ORG', 'index': 9}, {'word': '##ot', 'score': 0.9928787350654602, 'entity': 'I-ORG', 'index': 10}, {'word': '[UNK]', 'score': 0.5744695067405701, 'entity': 'I-ORG', 'index': 11}] ``` The [UNK] token is back<|||||>For sentence 4, this is because the ##pp in “Appointments”, is not being tagged as an entity. This will require a separate PR that assumes that all the word pieces attached to a tagged entity token, should also be tagged with the same entity, whether or not it was tagged.<|||||>A similar situation is happening in sentence 2. The clue is in the value for “index”. You’ll notice that the tokens aren’t contiguous and so aren’t being grouped together. This implies that some middle word pieces aren’t being tagged as entities.<|||||>For the [UNK] issue, this “might” be because that word piece token was out of vocabulary and so gets converted to [UNK] at the decoding step. Since this happens before entity grouping, I think safe to say this is unrelated to entity grouping and is related to how the raw NER forward pass is handled. Perhaps we can separate this from the above issue? Both will require separate PR’s to address.<|||||>Actually you're right it seems that sentences n°2 and n°4 are showing a different issue : if the index is not contiguous (because a part is missing in the prediction : "pp" for n°4 and "94" for n°2) then the grouping fails. It's indeed a different issue.<|||||>> For sentence 4, this is because the ##pp in “Appointments”, is not being tagged as an entity. This will require a separate PR that assumes that all the word pieces attached to a tagged entity token, should also be tagged with the same entity, whether or not it was tagged. Although I agree that it could be solved in a next PR, shouldn't this more 'holistic' view be preferable (and be the default). If one token in a word is 'missed' but the other four (e.g. PER-PER-O-PER-PER) are an entity the whole word is an entity (and not two separate entities). We 'know' what the word-level comprehends the model doesn't<|||||>@HHoofs agree that this should be the default. If the "word-level" implementation is submitted as a PR, this should not be the default behaviour and should be explicitly set.<|||||>I agree with that, what I meant however was the following case: `Italy` Let's say that this consists of three subtokens: `_It`, `a`, `ly` If the first and last tokens are assigned as Country en the middle as None, it would now result in a splitted output (if I understand correctly). I would suggest that the outputs of all three subtokens are averaged and than the highest output class is selected.<|||||>In pseudo-code, I would suggest the following (order): ``` ... # first check if the user want to have grouped entities if self.grouped_entities: word_scores = [] for token in tokens: # first input should always be a 'new word' if is_new_word(token): word_scores.append(score) score = np.zeros((0,?)) score = np.sum(score, token['score']) # now you have a list of summed entity scores for each seperate word word_scores.argmax(axis=-1) ... else: return ... 
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
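Building on the pseudo-code suggested above, here is a more complete sketch of word-level aggregation (average the sub-token scores, then take the arg-max per word). The `tokens` dict layout with a per-label `scores` array is an assumption made for illustration, not the pipeline's actual output format:

```python
import numpy as np

def group_subtokens_into_words(tokens):
    """Hedged sketch: merge sub-token predictions into word-level predictions.

    `tokens` is assumed to be a list of dicts with 'word' (a wordpiece string)
    and 'scores' (a per-label probability array); this layout is illustrative.
    """
    words = []
    for token in tokens:
        if token["word"].startswith("##") and words:
            # Continuation piece: attach it to the current word.
            words[-1]["pieces"].append(token["word"][2:])
            words[-1]["scores"].append(token["scores"])
        else:
            words.append({"pieces": [token["word"]], "scores": [token["scores"]]})

    results = []
    for word in words:
        mean_scores = np.mean(word["scores"], axis=0)  # average over sub-tokens
        results.append({
            "word": "".join(word["pieces"]),
            "label_id": int(np.argmax(mean_scores)),   # arg-max of the averaged scores
            "score": float(np.max(mean_scores)),
        })
    return results
```

With this scheme, a word whose middle sub-token is untagged (like the 'pp' in 'Appointments') still ends up with a single label for the whole word.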
transformers
5,076
closed
Colab session crashes on transformers
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details Hi everybody, Something really strange happens in Colab after the recent updates of the DataCollate class. I don't know if the two things are correlated, however, after I install the following packages !git clone https://github.com/huggingface/transformers.git !pip install ./transformers !pip install -U nlp and then I try to load them import nlp from transformers import T5Tokenizer The Colab instance crashes. Please find below the log. **WARNING:root:kernel 485c962d-efa0-4103-9c68-ed22abd8839f restarted**
06-17-2020 08:16:31
06-17-2020 08:16:31
The same thing for me. Colab (using TPU) crashes when importing transformers package. ![image](https://user-images.githubusercontent.com/35801846/84882301-ee662800-b086-11ea-9a1c-ff07853af864.png) ![image](https://user-images.githubusercontent.com/35801846/84882177-c5459780-b086-11ea-8c47-4b06eb3e8249.png) <|||||>Same issue.<|||||>I'm trying to reproduce, but can't manage to make colab crash. @khalilRhouma, @amitness, did you have similar code to @antoniomastro1996? Would it be possible for you to show me the code you used?<|||||>@LysandreJik yes, of course you can follow this: https://colab.research.google.com/drive/1jwXgtOXE8v8_qkiOCbjFQRFC5semK8T7?usp=sharing<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,075
closed
Converting to ONNX doesn't apply to all models
Is it possible to list or highlight which models are ONNX convertible? On a fresh python environment: ``` python -m pip install -U transformers python -m pip install mosestokenizer ``` The transformer version on the machine is `2.11.0` When trying to convert the https://huggingface.co/transformers/model_doc/marian.html to ONNX, it throws the following error: ``` $ python convert_graph_to_onnx.py --framework pt --model Helsinki-NLP/opus-mt-en-ROMANCE opus-mt-en-romance.onnx Neither PyTorch nor TensorFlow >= 2.0 have been found. Models won't be available and only tokenizers, configurationand file/data utilities can be used. ONNX opset version set to: 11 Loading pipeline (model: Helsinki-NLP/opus-mt-en-ROMANCE, tokenizer: Helsinki-NLP/opus-mt-en-ROMANCE) /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/tokenization_utils.py:828: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead. warnings.warn( stdbuf was not found; communication with perl may hang due to stdio buffering. Error while converting the model: 'NoneType' object has no attribute 'from_pretrained' ``` Then after installing pytorch ``` $ python -m pip install -U pytorch $ python convert_graph_to_onnx.py --framework pt --model Helsinki-NLP/opus-mt-en-ROMANCE opus-mt-en-romance.onnx ONNX opset version set to: 11 Loading pipeline (model: Helsinki-NLP/opus-mt-en-ROMANCE, tokenizer: Helsinki-NLP/opus-mt-en-ROMANCE) /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/tokenization_utils.py:828: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead. warnings.warn( stdbuf was not found; communication with perl may hang due to stdio buffering. Downloading: 100%|██████████████████████████████| 312M/312M [01:59<00:00, 2.61MB/s] Error while converting the model: Folder /Users/username/git-stuff/transformers/src/transformers is not empty, aborting conversion ``` Then after creating a new directory: ``` $ python ../convert_graph_to_onnx.py --framework pt --model Helsinki-NLP/opus-mt-en-ROMANCE opus-mt-en-romance.onnx ONNX opset version set to: 11 Loading pipeline (model: Helsinki-NLP/opus-mt-en-ROMANCE, tokenizer: Helsinki-NLP/opus-mt-en-ROMANCE) /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/tokenization_utils.py:828: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead. warnings.warn( stdbuf was not found; communication with perl may hang due to stdio buffering. Using framework PyTorch: 1.5.0 Found input input_ids with shape: {0: 'batch', 1: 'sequence'} Found input attention_mask with shape: {0: 'batch', 1: 'sequence'} Found output output_0 with shape: {0: 'batch', 1: 'sequence'} Found output output_1 with shape: {0: 'batch', 1: 'sequence'} Ensuring inputs are in correct order decoder_input_ids is not present in the generated input list. Generated inputs order: ['input_ids', 'attention_mask'] /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:173: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
if not padding_mask.any(): /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:590: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert embed_dim == self.embed_dim /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:591: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert list(query.size()) == [tgt_len, bsz, embed_dim] /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:633: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert attn_weights.size() == (bsz * self.num_heads, tgt_len, src_len) /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:642: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert key_padding_mask is None or key_padding_mask.size()[:2] == (bsz, src_len,) /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/transformers/modeling_bart.py:654: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert attn_output.size() == (bsz * self.num_heads, tgt_len, self.head_dim) /Users/username/.pyenv/versions/3.8.0/lib/python3.8/site-packages/torch/onnx/utils.py:736: UserWarning: ONNX export failed on ATen operator triu because torch.onnx.symbolic_opset11.triu does not exist warnings.warn("ONNX export failed on ATen operator {} because " Error while converting the model: Exporting the operator triu to ONNX opset version 11 is not supported. Please open a bug to request ONNX export support for the missing operator. ``` It is understandable that ONNX might not be supporting all models that are available through Huggingface's transformers. **But is there a way to know which model(s) are ONNX convertible and which aren't?**
06-17-2020 08:01:56
06-17-2020 08:01:56
Here is a related PyTorch issue: https://github.com/pytorch/pytorch/issues/32968.

Here is a list of pretrained models that can be exported to ONNX: bert-base-cased, distilbert-base-uncased, roberta-base, gpt2, distilgpt2, openai-gpt, albert-base-v2, xlnet-base-cased

So far, the following models have problems exporting to ONNX: bart-large, transfo-xl-wt103, t5-base, xlm-mlm-en-2048 <|||||>Has anyone had any luck resolving this issue? The PyTorch issue linked above has a couple of potential workarounds. I notice it affects Pegasus as well.<|||||>This pull request was closed by its author but doesn't raise the issue when I run convert on a Bart model: https://github.com/huggingface/transformers/pull/6334 <|||||>Can a custom DistilBERT model that is saved on local disk be converted to ONNX?
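Regarding the last question, here is a hedged sketch of exporting a locally saved DistilBERT with `torch.onnx.export` directly, roughly what `convert_graph_to_onnx.py` does under the hood. The `./my-distilbert` path is illustrative, and a fine-tuned model with a task head may need a different `Auto*` class:

```python
# Hedged sketch: export a locally saved DistilBERT checkpoint to ONNX.
import torch
from transformers import AutoModel, AutoTokenizer

model_dir = "./my-distilbert"  # illustrative: a directory produced by save_pretrained()
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModel.from_pretrained(model_dir)
model.eval()

encoded = tokenizer.encode_plus("This is a sample input", return_tensors="pt")
dummy_inputs = (encoded["input_ids"], encoded["attention_mask"])

torch.onnx.export(
    model,
    dummy_inputs,
    "distilbert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "last_hidden_state": {0: "batch", 1: "sequence"},
    },
    opset_version=11,
)
```

DistilBERT does not use the `triu` operator, so it does not hit the unsupported-operator error reported above for BART-based models.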
transformers
5,074
closed
how to get complete URLs to weights in 2.11.0
# ❓ Questions & Help Since it is related to the newest release, I would like to raise the question here. As our servers in the company are not able to access the HF url, we have to download the models locally and upload to the servers. Now seems I couldn't find the links to download `pytorch_model.bin`, `config.json`, `vocab.json`, `merge.txt`. The only one I can find is https://huggingface.co/facebook/bart-large But it only shows: | File name | Last modified | File size| |-- | -- | --| |config.json | Fri, 24 Apr 2020 15:58:48 GMT | 1.2KB| |pytorch_model.bin | Wed, 12 Feb 2020 19:53:45 GMT | 1.5GB| |rust_model.ot | Sat, 25 Apr 2020 15:33:01 GMT | 1.9GB| There is no `vocab.json`, `merge.txt`. I want to find the complete URLs to these files.
06-17-2020 07:48:44
06-17-2020 07:48:44
They are the same as Roberta's
```python
vocab_url = "https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-vocab.json"
merges_url = "https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-merges.txt"
```
Is your question more general than just `bart-large`? (Feel free to close if not :) )<|||||>Thanks, and yes. I'm interested in the more general case of retrieving the URLs to weights for different models. Currently, what I actually do is switch back to 2.10.0, go to the corresponding `modeling_xxx.py`, and find the download link.<|||||>You can find URLs to specific files on each model page (click on "List all files"): https://huggingface.co/distilbert-base-cased#list-files<|||||>Hi @julien-c , as mentioned, I actually know this way, but "list all files" doesn't seem to give me the `vocab.json`, `merge.txt`; or are they not required? <|||||>Not all tokenizer types use those files. For instance Wordpiece (e.g. bert) is just one vocab.txt file.<|||||>Yup, I understand that, I just named an example. But I just want to make sure that the files in https://huggingface.co/distilbert-base-cased#list-files are sufficient for me to use.
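A hedged alternative to hunting for individual file URLs: download once on a machine with internet access, save everything with `save_pretrained`, and copy the folder to the offline servers; `./bart-large-local` is an illustrative path.

```python
# Hedged sketch: fetch a model once, save it locally, and reuse it offline.
from transformers import AutoModel, AutoTokenizer

model_name = "facebook/bart-large"
local_dir = "./bart-large-local"  # illustrative path

# On a machine that can reach huggingface.co / S3:
AutoTokenizer.from_pretrained(model_name).save_pretrained(local_dir)  # vocab/merges files
AutoModel.from_pretrained(model_name).save_pretrained(local_dir)      # config.json + pytorch_model.bin

# After copying the directory to the offline server:
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModel.from_pretrained(local_dir)
```

This way all the tokenizer files the model actually needs are written out, whatever tokenizer type it uses.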
transformers
5,073
closed
fix typo
06-17-2020 06:37:55
06-17-2020 06:37:55
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=h1) Report > Merging [#5073](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e4aaa4580515446cd5a2972ab42fec0b95819c84&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5073/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5073 +/- ## ======================================= Coverage 77.26% 77.26% ======================================= Files 133 133 Lines 22146 22146 ======================================= + Hits 17110 17111 +1 + Misses 5036 5035 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5073/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=footer). Last update [e4aaa45...c2c5e07](https://codecov.io/gh/huggingface/transformers/pull/5073?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for catching it.
transformers
5,072
closed
🐛 [TFTrainer] Wrong number of optimization steps
# 🐛 Bug

It seems TFTrainer computes the wrong number of optimization steps:

https://github.com/huggingface/transformers/blob/e4aaa4580515446cd5a2972ab42fec0b95819c84/src/transformers/trainer_tf.py#L79

**It does not take into account the gradient accumulation**.

---

I believe this line should be changed to:

```
if self.args.dataloader_drop_last:
    approx = math.floor
else:
    approx = math.ceil
self.train_steps: int = approx(self.num_train_examples / (self.args.train_batch_size * self.args.gradient_accumulation_steps))
```

---

Also, on TPU `drop_remainder` should be called on gradient accumulation steps as well.

https://github.com/huggingface/transformers/blob/e4aaa4580515446cd5a2972ab42fec0b95819c84/src/transformers/trainer_tf.py#L81-L86

Should be changed to:

```
ds = (
    self.train_dataset.cache()
    .shuffle(self.num_train_examples)
    .batch(self.args.train_batch_size * self.args.gradient_accumulation_steps, drop_remainder=self.args.dataloader_drop_last)
    .unbatch()
    .batch(self.args.train_batch_size)
    .prefetch(tf.data.experimental.AUTOTUNE)
)
```

---

@jplu Maybe to add in #5065 ?
06-17-2020 01:23:18
06-17-2020 01:23:18
You are right, the number of optimization steps is already fixed in another PR (#5051). And indeed the batch size for TPUs has to take into account the gradient accumulation, but I put this off for later because we don't yet have a proper way to differentiate a run on TPU/CPU/GPU. Also, the `unbatch` then `batch` is not needed. The same thing applies later when computing the loss in the training step.<|||||>Thanks for your answer @jplu ! Can I ask you for clarification about this:

> Also the unbatch then batch is not needed. Same thing apply later when computing the loss in the training step.

We still need the data to be batched based on the batch_size_per_device, not the total batch size, right? Is the `floor` / `ceil` needed? Are there any other changes I should apply locally to make it work?<|||||>I'm pretty sure that the TPUs have to be set for the full size of batches (including those with the accumulation).

> Is the floor / ceil needed ?

`floor` and `ceil` are also needed in order to be sure you get the proper approximation.

> Is there any other changes I should apply locally to make it work ?

There are certainly other changes to make, but I still have to figure out which ones :smile: Don't hesitate to participate on the PR I have opened ^^<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
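To make the step arithmetic concrete, here is a small illustrative calculation; the numbers are made up, not taken from the issue:

```python
import math

# Illustrative numbers only.
num_train_examples = 10_000
train_batch_size = 32
gradient_accumulation_steps = 4
dataloader_drop_last = False

approx = math.floor if dataloader_drop_last else math.ceil

# Without accounting for accumulation (the behaviour being reported): 313 steps.
steps_without_accumulation = approx(num_train_examples / train_batch_size)

# One optimizer step per `gradient_accumulation_steps` batches: 79 steps.
steps_with_accumulation = approx(
    num_train_examples / (train_batch_size * gradient_accumulation_steps)
)

print(steps_without_accumulation, steps_with_accumulation)
```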
transformers
5,071
closed
glue.py Data Processor Index Error for Large Data
# 🐛 Bug

## Information

I am using `bert-base-uncased` and `roberta-base` for sentence pair classification, following the exact same example concept and data processing as MRPC. I have a large collection of private data that is in the _exact same format as MRPC_, and I am using `run_glue.py` as base code for running my data. Everything works great when I have smaller data files (between 10K-200K samples), but the issue happens when I increase the data size (300K and above); I am not sure if the issue is a bug or a memory issue. Here is the issue that happens:

```
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 200, in _create_examples
    text_a = line[3]
IndexError: list index out of range
```

The data, the format and the values are consistent (I am not feeding in an empty file). If I concatenate the data in smaller samples, it works just fine. Is this an issue with the data processor or some sort of memory issue? It is perhaps important to note that I have not had any issues using this large dataset for training different models.

## To reproduce

Steps to reproduce the behavior:

1. Have _title-pair data_ in the exact format of MRPC that is large, say larger than 300K
2. Run `run_glue.py` with MRPC as the task and you should get the following error:

## The Error

```
Traceback (most recent call last):
  File "run_glue.py", line 262, in <module>
    main()
  File "run_glue.py", line 140, in main
    train_dataset = GlueDataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
  File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/transformers/data/datasets/glue.py", line 118, in __init__
    examples = self.processor.get_train_examples(args.data_dir)
  File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 179, in get_train_examples
    return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
  File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 200, in _create_examples
    text_a = line[3]
IndexError: list index out of range
```

## Environment info

- `transformers` version: 2.11.0
- Platform: Linux-4.14.171-105.231.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0 (True)

Thank you very much for your time and help, in advance.
06-17-2020 00:36:57
06-17-2020 00:36:57
Hi Ali! Based on your error info, it seems that a line in your `train.tsv` does not follow the MRPC format. Or more specifically, that line may contain an empty field. Could you add a try-except to catch and print that line? It'll look like:
```python
try:
    text_a = line[3]
except:
    print(line)
```<|||||>@JetRunner Thank you very much for your suggestion. I found that there was a weird entry (that was not NaN nor empty) in some column of my data, and removing it with some modifications fixed the issue. Thank you again for your clear explanation and suggestion.
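A hedged sketch for scanning a `train.tsv` for rows that would break `line[3]`; the five-column expectation is an assumption based on the MRPC layout, so adjust it for your own data:

```python
import csv

# Report rows of an MRPC-style train.tsv that don't have the expected number
# of columns (label, id1, id2, text_a, text_b = 5 columns; an assumption).
expected_columns = 5

with open("train.tsv", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="\t")
    for line_number, row in enumerate(reader, start=1):
        if len(row) < expected_columns:
            print(f"Row {line_number} has only {len(row)} columns: {row}")
```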
transformers
5,070
closed
Errors while running pytest
# ❓ Questions & Help Hi! I followed the installation procedures from source for a conda environment. `make test-examples` returns with the following error: ``` FAILED examples/token-classification/test_ner_examples.py::ExamplesTests::test_run_ner - AssertionError: 2.1329751014709473 not less than 1.5 FAILED examples/test_examples.py::ExamplesTests::test_run_glue - AssertionError: 0.5 not greater than or equal to 0.75 ``` I also tried `make test` but get the following errors: ``` FAILED tests/test_modeling_distilbert.py::DistilBertModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0. FAILED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather input tensors must have the same number of dimensions: got 1, ... FAILED tests/test_modeling_roberta.py::RobertaModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0. FAILED tests/test_modeling_bert.py::BertModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0. FAILED tests/test_modeling_albert.py::AlbertModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0. FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0. ``` I followed the setup steps mentioned in the original readme. I am running the tests in a 8 GPU machine. Please let me know how to fix this. output of `trasnformers-cli env` ``` - `transformers` version: 2.11.0 - Platform: Linux-5.3.0-1023-aws-x86_64-with-debian-buster-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ``` Thanks!!
06-16-2020 21:08:30
06-16-2020 21:08:30
Hi, could you provide the versions of your software, like asked in the issue template? Namely tansformer version, python version, pytorch version, tensorflow version ...<|||||>updated<|||||>I still don't know what's your transformer version, which is arguably the most important version of the list :sweat_smile: Do you mind running `transformers-cli env` in your environment? It should output something along the lines of: ``` - `transformers` version: 2.11.0 - Platform: Linux-5.6.15-1-MANJARO-x86_64-with-arch-Manjaro-Linux - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ```<|||||>apologies :( , updated again.<|||||>I met the same error with the same environment. <|||||>@LousyLory Hi, have you solved this problem? I have try python3.6/3.7 + torch 1.5/1.4 in venv/conda. And I have checked my environment using `transformers-cli env`, which output same with @LousyLory . All of them fail the test. For conda + python3.6 + torch 1.4: the failure case is: (The other cases are just like this) ``` ======================================================================================================================== short test summary info ========================================================================================================================= FAILED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward - RuntimeError: tensor.ndimension() == static_cast<int64_t>(expected_size.size()) INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/cud... FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [2, 2, 4, 7, 8], but expected [2, 4, 4, 7, 8] (gather at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/cud... FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [7, 2, 32], but expected [7, 4, 32] (gather at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/cuda/comm.c... FAILED tests/test_modeling_ctrl.py::CTRLModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [2, 2, 4, 7, 8], but expected [2, 4, 4, 7, 8] (gather at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/cud... FAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_multigpu_data_parallel_forward - RuntimeError: Caught RuntimeError in replica 0 on device 0. ================================================================================================= 5 failed, 1054 passed, 560 skipped, 302 warnings in 253.59s (0:04:13) ================================================================================================== ``` <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,069
closed
Typo
06-16-2020 20:44:17
06-16-2020 20:44:17
transformers
5,068
closed
Fix all sphynx warnings
This PR touches a lot of files but nothing that should break anything, it's just there to fix all sphynx warnings. Why? Well some of them are just there to annoy us and don't have any effect, but for roughly half of them, there is something wrong going on in the docs so it's best to fix them all, especially to make it easier to spot new warnings introduced when writing new docs. The only thing that is a real change is that I removed `members` in the `AdamWeightDecay` because it was documenting its `apply_gradients` method using the docstring from keras which is not sphynx-compatible (it was not rendering properly in our docs). If we really want that method documented (it was the only one), I can rewrite the docstring.
06-16-2020 20:31:50
06-16-2020 20:31:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=h1) Report > Merging [#5068](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/439aa1d6e9c953069f75fc23c737221d0df2c977&el=desc) will **increase** coverage by `0.98%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5068/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5068 +/- ## ========================================== + Coverage 76.45% 77.43% +0.98% ========================================== Files 130 130 Lines 22024 22024 ========================================== + Hits 16839 17055 +216 + Misses 5185 4969 -216 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `92.85% <ø> (ø)` | | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | | | [src/transformers/configuration\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.00% <ø> (ø)` | | | [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.58% <ø> (ø)` | | | [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `78.16% <ø> (ø)` | | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <ø> (ø)` | | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.19% <ø> (ø)` | | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.33% <ø> (ø)` | | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `91.28% <ø> (ø)` | | | ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/5068/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=footer). 
Last update [439aa1d...6a52a4b](https://codecov.io/gh/huggingface/transformers/pull/5068?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,067
closed
Modify BERT/BERT-descendants to be TorchScript-able (not just traceable)
# 🚀 Feature request Modify BERT models (src/transformers/modeling_bert.py) to conform to TorchScript requirements, so they can be ``jit.script()``-ed, not just ``jit.trace()``-ed (as is [currently the only supported option](https://huggingface.co/transformers/torchscript.html)) *Note:* I have a working version implementing this, which I would like to contribute. See below. ## Motivation A scriptable model would allow for variable-length input, offering big speedup gains and simplification (no need to create different models for different input lengths). In addition, it would avoid other potential pitfalls with tracing (e.g., code paths that are input dependent and not covered by the tracing example input). Related issues: https://github.com/huggingface/transformers/issues/2417 https://github.com/huggingface/transformers/issues/1204 possibly also https://github.com/huggingface/transformers/issues/1477 https://github.com/huggingface/transformers/issues/902 ## Your contribution I have a working PR that modifies all the models in src/transformers/modeling_bert.py and makes them TorchScript-able. I have not tested it on other models that use BERT components (e.g., albert), but it should be possible to expand the capability to those, as well. However, it would require some significant work to make it ready for submission: besides formatting, documentation, testing etc., my current version changes the method signatures, and I would need to avoid that to maintain backward-compatibility. Before putting in that work, I'd like to make sure that such a PR is something you'd be interested in and would be willing to merge in, assuming it meets the requirements.
06-16-2020 19:32:10
06-16-2020 19:32:10
Hi! This is interesting. Could you resume what are the changes that would be needed in order to have our models scriptable?<|||||>Sure, mostly my changes fall into these categories: ### 1. Class members can only be basic types, None, nn.Modules, or list or tuple thereof - Solution: don't save whole config in the model, only individual entries you need, which are basic types - Solution for nn.functional: use the nn.Module equivalent of nn.functional - Solution for other functions: define and call the function globally, not as a class member ### 2. Inputs are assumed to be Tensors - Solution: use typing to tell TorchScript the types (note - requires typing to be supported. I checked in python 3.7, but not 3.5 or 3.6) ### 3. TorchScript can't figure out that an Optional is not None - Solution: add assertions to help TorchScript ### 4. Variable types are not allowed to change depending on conditionals - Solution: use consistent types (with Optional to tell TorchScript that a variable/argument can be None) - this is where I had to change the interface, since current BERT models can optionally return attention probabilities. Had to change so that they always return the same sized output tuple, with None values, instead). ### 5. TorchScript can't handle the expand (*) operator on lists - Solution: explicitly enumerate the arguments ### 6. You can't use nn.Modules as local variables (take variable number of args) - Solution: use the nn.functional equivalents of the modules. ### 7. TorchScript doesn't know about nn.ModuleList's enumerate - Solution: use a regular for loop Most of these are pretty small changes and do not affect the logic. #4 and #1c can be tricky, and #5 might be an issue with recent changes made here: https://github.com/huggingface/transformers/pull/4874<|||||>Hi @sbrody18, Thanks for opening this issue and taking the time to dive into our TorchScript support. Regarding **_A scriptable model would allow for variable-length input, offering big speedup gains and simplification_**: Do you have some numbers to compare against the current transformers library? We ran some TorchScript tests and the differences where not that huge at that time, may be this has changed since? I (and probably others) would be very interested in knowing more on this aspect. Regarding the list of changes you suggested: I'm currently not really in favour of such changes as they are almost changing all the way the library is designed and would have an impact on all the models. Some of them might be further discussed if there are real performance benefits.<|||||>Hi @mfuntowicz, My co-workers and I have run the experiments that show that inference time scales more-or-less linearly with the input size (also supported in the linked article below). Assuming you are trying to run in C++ (which is the reason to use TorchScript), the current solution, using `trace()` means that you can only use fixed length input - you have to set a large value for max_length to support your longest expected input, and zero-pad all input to the max-length. That means if your max_length is 1000 tokens and your average length is 20 tokens, your inference is taking 50x longer than it should. You can see an example of how big a difference this makes, [here](https://medium.com/roblox-tech-blog/how-we-scaled-bert-to-serve-1-billion-daily-requests-on-cpus-d99be090db26), under 'Scenario #3: Smaller Inputs (Dynamic Shapes)'. 
I'm guessing the tests you ran were focused specifically on the technical behavior of the models on a fixed input set and didn't take into account the max-length issue. Also, this is only an issue if you need to use TorchScript in order to run in C++. Re. the change to design, my intention is to keep the model changes to a minimum (e.g., adding type hints and asserts does not change the design at all) and make sure they are fully backwards compatible. There would still be some changes required, but I don't think they are drastic. As I said in the original post, I have a PR where I did a lot of the work, and I'd be happy to work with someone to figure out how to get it to a state where it can be merged.<|||||>@sbrody18 do you mind sharing your fork ? <|||||>Yes, I can do so, but it may have to wait a week or two - things are busy at the moment.<|||||>I am very interested in this work as well. Our team would like to be able to use TorchScript so we can train without depending on Python. If there's any way I can be of help, I would gladly offer some time here!<|||||>Sorry for the delay. I hope to have a reasonable PR later this week.<|||||>My change is available at https://github.com/sbrody18/transformers/tree/scripting Note that it is based off of a commit from earlier this month: https://github.com/huggingface/transformers/compare/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d...sbrody18:scripting Since then there have been changes made to the BertModel interface adding a return_tuple argument and changing the return type of the forward method, and this would require more effort to resolve. I listed the principles I used in https://github.com/huggingface/transformers/issues/5067#issuecomment-644989375. The original components tended to return different sized tuples, depending on arguments, which is problematic for TorchScript. When a component BertX required an interface change to be scriptable, I made a BertScriptableX version with the modifications, and had the BertX component inherit from it and just modify the output so it is compatible with the original API. I made scriptable versions of BertModel and all the BertFor\<Task\> classes, except BertForMaskedLM (some complexities there were too much work for a proof of concept). I added a [test](https://github.com/sbrody18/transformers/blob/scripting/tests/test_modeling_bert.py#L529) to demonstrate the scripting capability. Note that my change disables the [gradient_checkpoint path](https://github.com/sbrody18/transformers/blob/scripting/src/transformers/modeling_bert.py#L474-492) in the encoder. I think this can be resolved, but I didn't have the time to work on it.<|||||>@sgugger @joeddav: see comment above for preliminary PR. Probably too big and complicated to try to merge as is, but would be happy to work with someone to break things down into reasonable chunks.<|||||>Thanks for all the work. Looking at this and our recent changes in the model API (in particular the return_dict argument) I think we probably won't be able to have the models be fully compatible with TorchScript. What is possible however would be to have a second version of the models that don't have the option of return_dict (we can also remove output_hiddens/output_attentions if it makes life easier) and would be fully scriptable. 
Since you already started with some components in a different class, I think we should have two models (let's say `BertModel` and `ScriptableBertModel`) with the same named parameters so you can seemlessly save/load from one to the other (a workflow would then be to experiment with `BertModel`, save the fine-tuned model and then go to `ScriptableBertModel` for inference for instance). Then I'm not sure what's easiest: - have the two inherit from some base class and have a minimal of methods that need to be different (probably just the forward?) - or have the second class be a complete rewrite. I think we should focus on having a proof of concept on one model before moving forward with others. <|||||>That makes sense to me. It will probably result in some amount of code duplication, and we'd need to make sure we keep the named parameters in sync, but probably easier to maintain. So would you suggest the ScriptableBertModel is a separate file?<|||||>Not necessarily a separate file, I guess it depends on the amount of code to rewrite. I think we can worry about this in a second stage, once we have a good poc.<|||||>@sgugger Please see POC implementation in PR above.<|||||>@sbrody18 in the original PR https://github.com/huggingface/transformers/pull/6846 you created for this issue, you mentioned you saw a large perf increase with dynamic sequences. What did you use as a test to make that determination?<|||||>@kevinstephano - see discussion and conclussions [here](https://github.com/huggingface/transformers/pull/6907#issuecomment-687343119) We saw a large perfomance increase with an older version of PyTorch, where traced models required the input to be the same length as the one used for tracing, making it necessary to pad short sequences at inference, and adding a lot of unnecessary computation overhead. With recent versions of PyTorch (>=1.3, I think), this is no longer the case.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi, I just tried `jit.script()`ing using Bert from the PR (just copied modeling_bert, modeling_utils and replaced relative imports of other dependencies with imports from transformers master branch) I see there are try blocks left in the code, which cause `jit.script` to fail: ``` UnsupportedNodeError: try blocks aren't supported: File "/zhome/1d/8/153438/experiments/master-thesis/export_model/modeling_utils_script_proof.py", line 131 Get torch.device from module, assuming that the whole module has one device. """ try: ~~~ <--- HERE return next(self.parameters()).device except StopIteration: ``` @sbrody18 how did you export the model? I guess the workaround would be to remove try blocks, but apparently it did work for you as it is. <|||||>@fteufel you can see #6846 for a stand-alone implementation that worked **at a previous version of the transformers library**. Maybe that's good enough for your purposes? The transformers library has changed significantly since these PRs and I'm not sure if that try was added. If you are using code from the transformers master branch in the model itself, it's likely you will encounter several unscriptable bits. Specifically for the next function, you can either: a. remove the try block, since there should always be at least one parameter on the model b. use the next with default: first_param = next(self.parameters(), None) if not first_param: <handle it> return first_param.device c. 
figure out a better way to decide the model device :)<|||||>@sbrody18 It seems this has not been merged into the official transformers library? My transformers version is 4.21.3, and I still cannot use `jit.script` to convert a BERT model to TorchScript.
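As an illustration of the kinds of changes listed above (type hints on inputs, `Optional` narrowing, no `try`/`except` blocks), here is a toy sketch; it is not code from the linked fork or from transformers, just a minimal module showing the patterns:

```python
import torch
from torch import nn
from typing import Optional

class ScriptableToyLayer(nn.Module):
    def __init__(self, hidden_size: int = 8):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor,
                attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        # Type hints tell TorchScript that not every input is a plain Tensor (item 2 above).
        out = self.dense(hidden_states)
        if attention_mask is not None:
            # The `is not None` check (or an assert) lets TorchScript narrow the Optional (item 3).
            out = out * attention_mask
        return out

scripted = torch.jit.script(ScriptableToyLayer())
print(scripted(torch.randn(2, 4, 8)).shape)  # works for any sequence length, unlike a fixed trace
```

For the device lookup discussed at the end of the thread, replacing the `try` block around `next(self.parameters())` with a default value and an explicit `None` check is the workaround suggested above.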
transformers
5,066
closed
Tokenization+Transformers works with PyTorch but not TensorFlow on TPU
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): RoBERTa Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. In order to use Huggingface's tokenizer with TensorFlow's data pipeline (`tf.data.Dataset.map()`), the tokenize function must be wrapped in `tf.py_function()` or `tf.numpy_function()` (issue #3851). 2. Neither of these functions are not supported on TPU (tensorflow/tensorflow#30818). 3. This makes it impossible to run Huggingface transformers + tokenizer on a TPU using TensorFlow 4. However, the same tokenization on a TPU works under PyTorch (!) One workaround is to do tokenization before entering the TF data pipeline, but unfortunately my dataset is too large for that. Example of code that is necessary, but fails on a TPU: ``` def tokenize_encode_map_fn(text): encoded = tf.py_function(tokenize_encode, # tokenize_encode is a wrapper around the Huggingface tokenizer and encoder inp=[text, hypothesis], Tout=[tf.int32, tf.int32]) return {"input_ids": encoded[0], "attention_mask": encoded[1]} tf_dataset.map(map_fn) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Be able to run tokenizer on a TPU with TensorFlow, like you can in PyTorch <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: transformers-2.11.0 - Platform: Google Colab TPU - Python version: Python 3.6.9 - PyTorch version (GPU?): - Tensorflow version (GPU?): TensorFlow 2.2.0 / TPU - Using GPU in script?: - Using distributed or parallel set-up in script?:
06-16-2020 19:22:13
06-16-2020 19:22:13
This makes sense! Do you have some examples of some tokenization that can happen on TPUs with TensorFlow, so that we may consider what we have to do to enable this?<|||||>I am not totally sure, but one thing to look at would probably be TF's native `tf.keras.preprocessing.text.Tokenizer`, which (I think) works when used within tf.data.Dataset maps on a TPU. https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer<|||||>Assigning myself to keep this in mind<|||||> > I am not totally sure, but one thing to look at would probably be TF's native `tf.keras.preprocessing.text.Tokenizer`, which (I think) works when used within tf.data.Dataset maps on a TPU. > > https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer Interesting. Is there a way to translate HF's tokenziers to this? I am wondering it's close to being possible that we don't need to convert text to Ids on the HF's data processors. <|||||>TensorFlow 2.3.x adds the Sentencepiece tokenizer to `tensorflow_text`. You can use this short script to turn a HuggingFace tokenizer into a `tensorflow_text.SentencepieceTokenizer`: https://gist.github.com/noahtren/6f9f6ecf2f81d0975c4f54afaeb95318 I tested it on TPU and it's been working for me. My experience is that HuggingFace tokenizers are wrappers for https://github.com/google/sentencepiece so it's really simple to make it compatible with TensorFlow graph mode. Not sure yet if this works for all huggingface pretrained tokenizers.<|||||>Did you try on the hf rust tokenizers as well? <|||||>@Santosh-Gupta No, I haven't tried that<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
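Building on the `tensorflow_text` suggestion above, a rough sketch of what a graph-mode tokenization map could look like. Assumptions: `tensorflow_text` >= 2.3 is installed, `spiece.model` is a sentencepiece model file whose ids match the model's vocabulary, and no special tokens (CLS/SEP) are added; both points need care in practice:

```python
import tensorflow as tf
import tensorflow_text as tf_text

MAX_LEN = 128

with open("spiece.model", "rb") as f:            # assumed local sentencepiece model file
    tokenizer = tf_text.SentencepieceTokenizer(model=f.read())

def encode(text):
    ids = tokenizer.tokenize(text)               # 1-D int32 ids for a scalar string, graph-compatible
    ids = ids[:MAX_LEN]
    n = tf.shape(ids)[0]
    input_ids = tf.pad(ids, [[0, MAX_LEN - n]])  # zero-pad to a fixed length
    attention_mask = tf.cast(tf.range(MAX_LEN) < n, tf.int32)
    return {"input_ids": input_ids, "attention_mask": attention_mask}

dataset = tf.data.Dataset.from_tensor_slices(["first example", "a second, longer example"])
dataset = dataset.map(encode).batch(2)           # no tf.py_function, so it can feed a TPU loop
```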
transformers
5,065
closed
[WIP] TF Trainer with TPUs
This PR is to make the TF trainer fully compliant with the TPUs. Should fix #5042, #4996 and #4994.
06-16-2020 18:17:30
06-16-2020 18:17:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=h1) Report > Merging [#5065](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fc24a93e6493c2689e5585d12b7c43730ad9b3ea&el=desc) will **decrease** coverage by `0.05%`. > The diff coverage is `12.28%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5065/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5065 +/- ## ========================================== - Coverage 79.02% 78.96% -0.06% ========================================== Files 138 138 Lines 24064 24089 +25 ========================================== + Hits 19017 19023 +6 - Misses 5047 5066 +19 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `17.92% <8.00%> (-0.77%)` | :arrow_down: | | [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `53.19% <42.85%> (+2.02%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (-0.24%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.92% <0.00%> (+0.29%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=footer). Last update [fc24a93...9326d27](https://codecov.io/gh/huggingface/transformers/pull/5065?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@Colanim Can you summarize your findings here? It would be better suited than going across multiple issues.<|||||>@jplu Sure ! * [x] Optimizer was passed as arguments to `_step` function, but we can pass only Tensors (issue #4994). This is fixed by ef02be8. * [x] Optimization steps were not computed correctly : it was ignoring gradient accumulation (issue #5072). This is fixed by a897e09. * [x] Computation of number of steps should take into account `dataloader_drop_last` (mentioned in #5072), and use `math.floor` in that case (instead of `math.ceil`). Fixed in #5051 * [ ] I'm still having error as described in #4996, but I didn't figure the reason yet. Maybe it's my model that has a problem, not `TFTrainer`.<|||||>Thanks for having sum up the issues here. > * [ ] Computation of number of steps should take into account `dataloader_drop_last` (mentioned in #5072), and use `math.floor` in that case (instead of `math.ceil`). Not fixed yet. This is fixed in PR #5051. > * [ ] I'm still having error as described in #4996, but I didn't figure the reason yet. 
Maybe it's my model that has a problem, not `TFTrainer`. This is what I'm currently looking for, but still not figuring out why :/ Which TPUs are you using, over ctpu, Cloud AI or Colab?<|||||>> Which TPUs are you using, over ctpu, Cloud AI or Colab? I'm currently using ctpu, but I could see a similar issue when using Colab.<|||||>For me, this is the problem: https://github.com/huggingface/transformers/blob/5f721ad6e48c9d846de25c3fefa0e50a306cbf10/src/transformers/trainer_tf.py#L388-L389 Somehow the exception is not caught on TPU, which crashes the training. Using the `max_steps` argument instead of `num_train_epochs` fixes the problem because we repeat the dataset, and therefore we never have an out-of-range error. In the eval code, since we iterate the dataset without repeat, it's causing the same problem.<|||||>Can I see all your log output during training?<|||||>What is the command line you are using to create your TPU and run the process? In order to be aligned with the same errors :)<|||||>@jplu finally I was wrong: even when using `max_steps` I'm having the problem. It always happens at the end of the first epoch (for both validation and training). I think it's due to how the dataset is iterated. According to [this link](https://www.kaggle.com/mgornergoogle/custom-training-loop-with-100-flowers-on-tpu), we should have a single iterator for the whole training procedure (the iterator should not be reset after each epoch), and always `repeat` the dataset indefinitely. Even for the validation dataset. Unfortunately I'm having trouble with GCP right now, so I can't try things on my end.<|||||>OK, can you share your command lines here please? Because even the size computation cannot be run on TPU with TF 2.2.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
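For reference, the pattern described in the Kaggle notebook linked above (repeat the dataset indefinitely and drive a single iterator for a fixed number of steps, so no out-of-range exception is ever raised) looks roughly like this. It is a sketch, not the actual `TFTrainer` code, and the dummy feature/label tensors are placeholders:

```python
import tensorflow as tf

strategy = tf.distribute.get_strategy()          # a TPUStrategy in the real setup

GLOBAL_BATCH_SIZE = 32
STEPS_PER_EPOCH = 100
EPOCHS = 3

features = tf.random.uniform((1000, 128), maxval=100, dtype=tf.int32)  # dummy data
labels = tf.random.uniform((1000,), maxval=2, dtype=tf.int32)

dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(1000)
    .batch(GLOBAL_BATCH_SIZE, drop_remainder=True)
    .repeat()                                    # never exhausts, so no OutOfRangeError
)
iterator = iter(strategy.experimental_distribute_dataset(dataset))

for epoch in range(EPOCHS):
    for _ in range(STEPS_PER_EPOCH):
        batch = next(iterator)                   # one iterator for the whole training run
        # strategy.run(train_step, args=(batch,)) would go here in the real loop
```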
transformers
5,064
closed
Reorganize documentation
This PR does two things: - reorganize the doc topics in five sections - complete the model list in the index Also I cut all lines at 119 chars (like the code) otherwise it's not readable in visual studio code (and I imagine other viewers), removed the stars from authors since it wasn't pointing to anything (can add them back but we should explain what they mean in that case) and made all authors list comma-separated with one last 'and'.
06-16-2020 17:11:05
06-16-2020 17:11:05
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=h1) Report > Merging [#5064](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d5477baf7d87b9bdad386f2f317732b85277b06b&el=desc) will **decrease** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5064/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5064 +/- ## ========================================== - Coverage 77.41% 77.36% -0.06% ========================================== Files 130 130 Lines 22023 22023 ========================================== - Hits 17050 17037 -13 - Misses 4973 4986 +13 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=footer). Last update [d5477ba...d7a3d5d](https://codecov.io/gh/huggingface/transformers/pull/5064?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,063
closed
Non-deterministic training issue on GPU: TF-BERT
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): TF-BERT Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) SST-2 * [ ] my own task or dataset: (give details below) ## To reproduce In spite of combining learnings from: * [the "complete recipe" in NVIDIA's slides from gputechconf](https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9911-determinism-in-deep-learning.pdf) * [a recently suggested workaround](https://github.com/tensorflow/tensorflow/issues/38185#issuecomment-643014439) for non-determinism issues with crossentropy loss ... I am still arriving at the following [short, non-deterministic colab notebook example](https://colab.research.google.com/drive/1VSU8lYFD0E1HKZrIL1MvyIRwAktlSF_t?usp=sharing) to train BERT . My results for the sum of model weights (as computed with [this suggested function](https://github.com/NVIDIA/tensorflow-determinism/issues/2#issuecomment-548210203)) after training **for only 5 steps** is (differences are **`highlighted`** below): | | Device | Before training | After training | | ------------- | ------------- | ------------- | ------------- | | Run 1 | GPU | -641227.5609667897224 | -641237.442 **`5159916282`** | | Run 2 | GPU | -641227.5609667897224 | -641237.442 **`3093758523`** | | | | | | | Run 1 | CPU | -641227.5609667301178 | -641238.1506845243275 | | Run 2 | CPU | -641227.5609667301178 | -641238.1506845243275 | This variance gets increasingly more pronounced when the model is trained for longer periods of time. I am expecting a general problem with the computational graph with BERT introducing non-determinism. As a result, this could affect a large part of the huggingface community. Please keep in mind that determinism is of key importance in certain industries and also a pre-requisite for reproducible research. Could you please help identify the source of non-determinism and provide guidance on how we can resolve it? Steps to reproduce the behavior: 1. Execute colab notebook above on the GPU runtime using Tensorflow 2.2.0, observe non-deterministic behavior 2. Execute colab notebook above on the CPU runtime using Tensorflow 2.2.0, observe deterministic behavior ## Expected behavior Training should be deterministic both on GPU and CPU runtime for TF 2.2.0. ## Environment info * tensorflow==2.2.0 * nlp==0.2.1 - `transformers` version: 2.11.0 - Platform: Linux, Ubuntu 18.04.3 LTS bionic - Python version: 3.6.9 - PyTorch version (GPU?): - - Tensorflow version (GPU?): 2.2.0-gpu - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
06-16-2020 16:31:06
06-16-2020 16:31:06
Hi! [This StackOverflow issue](https://datascience.stackexchange.com/questions/14812/making-keras-tensorflow-code-execution-deterministic-on-a-gpu) might be of interest to you. Namely: > In fact, the randomness(non-determinstic) is a behavior of GPU. > > The reason behind is that cuDNN(and othere CUDA stuffs) uses a non-deterministic algorithm to compute gradients, thus we can't determine anything.<|||||>Note that this issue is being addressed as an [issue in the tensorflow-determinism repo](https://github.com/NVIDIA/tensorflow-determinism/issues/19). I have also added a reference to that repo in the above-mentioned Stack Exchange / Data Science question.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
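For anyone trying to reproduce this, the usual knobs referenced by the tensorflow-determinism project are roughly the following; a minimal sketch, with the caveat that full determinism still depends on the TF/cuDNN versions and on every op having a deterministic GPU kernel:

```python
import os
import random
import numpy as np
import tensorflow as tf

SEED = 42

os.environ["TF_DETERMINISTIC_OPS"] = "1"       # request deterministic GPU kernels (TF >= 2.1)
os.environ["TF_CUDNN_DETERMINISTIC"] = "1"     # deterministic cuDNN convolutions

random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
```

Newer TensorFlow releases expose `tf.config.experimental.enable_op_determinism()` as a single switch, but that postdates the TF 2.2 setup used in this report.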
transformers
5,062
closed
What do the following parameters mean during the initialization of T5 model?
Hello, I am aware of the general Transformer model and I believe it is the same model used in the T5 architecture. I know we have the input vocab_size, which is the total vocabulary size. Besides this, the important parameters would be the embedding size (the size of the embedding of each token), the number of layers, the number of heads and others. In particular, looking at the T5Config class that is used to initialize a T5 model, what are the d_kv and d_model parameters? Is d_model the size of the embedding, and in that case, what is d_kv? The docstring is not really clear to me. Thanks for your help.
06-16-2020 15:54:45
06-16-2020 15:54:45
I added a description of `d_kv`. `d_model` is, as you say, the size of the embedding. `d_kv` is the size of the key, query and value projections. If you look at this blog post (http://jalammar.github.io/illustrated-transformer/), `d_model` corresponds to the size of a vector `x_1`, and `d_kv` to the size of the vectors `q_1, k_1, v_1`.
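To make that concrete, a small sketch using the public `T5Config`; the values roughly match the t5-small checkpoint and are illustrative only:

```python
from transformers import T5Config, T5Model

config = T5Config(
    vocab_size=32128,
    d_model=512,     # size of each token embedding / hidden state
    d_kv=64,         # size of each per-head key/query/value projection
    num_heads=8,     # attention inner dimension is num_heads * d_kv = 512 here
    d_ff=2048,
    num_layers=6,
)
model = T5Model(config)
```

In most released T5 checkpoints `d_model` equals `num_heads * d_kv`, but the implementation does not require it (t5-11b is the well-known exception).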
transformers
5,061
closed
More flexible wandb support for Trainer
# 🚀 Feature request **A.** Make it possible to initialize wandb outside the Trainer class. **B.** Add a `use_wandb` argument to the Trainer arguments. ## Motivation **A.** Currently, wandb configuration inside Trainer is very limited. There are only three environment variables: `WANDB_WATCH`, `WANDB_PROJECT`, and `WANDB_DISABLED`. (And `WANDB_DISABLED` does not work properly in some cases.) Making it possible to initialize wandb outside Trainer will allow us to: 1. Add custom fields to wandb.config 1. Add tags and notes, and generally make the configuration as flexible as possible 1. Upload files 1. Use `wandb.log` more safely outside the Transformers code It will also make the interaction with wandb clearer. **B.** It is a much clearer interface than an env variable. The question here is which option should take priority. ## Your contribution I have a picture in my mind of how to do this without breaking backward compatibility. I can make a PR, but may need some minor help with writing tests.
06-16-2020 15:31:14
06-16-2020 15:31:14
Yeah I agree with the motivations here. From my experience, I like to initialize wandb as soon as the main script starts, which has benefits like: - Capturing all the console logs, so if something fails before execution reaches the trainer class, one can debug the issue through Wandb. This is especially useful for doing training in a Kubernetes environment where the logs are not very easily available after a crash. - Instantiating at the beginning also allows us to capture all the CLI arguments properly before they might have been modified. This is a problem right now because Wandb only captures the training_args, leaving out model_args or data_args. This is important for replicating a training run in the future.<|||||>Doesn't it work already when initializing wandb outside? I believe that `wandb.init` does not create a new run if one is already running, so all the functions should be available anywhere in the script.<|||||>When looking over the code, I was pretty sure it would initialize a new run. However, I checked and everything (mentioned in the issue) works smoothly. <|||||>Ok, thanks for checking @Guitaricet
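A minimal sketch of the workflow this thread converged on (initialize wandb yourself before building the Trainer, which then reuses the already-running run); the project name, tags, and config fields are placeholders:

```python
import wandb
from transformers import TrainingArguments

wandb.init(
    project="my-project",                        # placeholder project name
    tags=["bert", "baseline"],
    notes="wandb initialized outside Trainer",
    config={"max_seq_length": 128},              # extra fields beyond TrainingArguments
)

training_args = TrainingArguments(output_dir="./out", num_train_epochs=1)
wandb.config.update(vars(training_args), allow_val_change=True)

# Build the Trainer as usual; it reuses the run created above instead of starting a new one.
# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```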
transformers
5,060
closed
Make default_data_collator more flexible and deprecate old behavior
This PR does two things: - avoid breaking changes when people had a custom `DataCollator` with a `collate_batch` method (there is still the breaking change with `DataCollator` not being a class anymore) - makes `default_data_collator` more flexible by handling dicts on top of `InputExamples` classes coming from our examples This fixes #5049
06-16-2020 13:28:18
06-16-2020 13:28:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=h1) Report > Merging [#5060](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d5477baf7d87b9bdad386f2f317732b85277b06b&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `84.21%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5060/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5060 +/- ## ========================================== + Coverage 77.41% 77.43% +0.01% ========================================== Files 130 130 Lines 22023 22029 +6 ========================================== + Hits 17050 17059 +9 + Misses 4973 4970 -3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5060/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <50.00%> (+0.09%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5060/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `98.33% <93.33%> (+8.67%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=footer). Last update [d5477ba...857975c](https://codecov.io/gh/huggingface/transformers/pull/5060?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Refactored on @julien-c suggestion and removed the little hack to handle tensors and lists of ints (which was cauding a 4x slowdown on my tests). I just use the first features to test if we have a tensor (then stack) or not (then use torch.tensor).<|||||>LGTM!
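A simplified sketch of the collation behavior described in this PR (not the exact library code; the real `default_data_collator` also special-cases the `label`/`label_ids` keys): inspect the first feature, convert dataclass-style `InputFeatures` to dicts, then stack tensors or build them from lists of ints per key.

```python
import torch

def simple_default_collate(features):
    if not isinstance(features[0], dict):
        features = [vars(f) for f in features]   # InputFeatures dataclass -> dict
    batch = {}
    for key, value in features[0].items():
        if value is None:
            continue
        if isinstance(value, torch.Tensor):
            batch[key] = torch.stack([f[key] for f in features])
        else:
            batch[key] = torch.tensor([f[key] for f in features], dtype=torch.long)
    return batch
```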
transformers
5,059
closed
[cleanup] examples test_run_squad uses tiny model
This speeds up the examples tests from 120s to 70s. Note that this probably understates the difference, since all the relevant models are cached on my local machine. We can also improve `test_run_glue` in a future PR. I made an issue discussing improvements this PR does not fix: #5059 ### More detail: Before (master): 119.36s ```bash ============================ slowest test durations ============================ 42.69s call examples/test_examples.py::ExamplesTests::test_run_squad 21.20s call examples/test_examples.py::ExamplesTests::test_run_glue 19.02s call examples/token-classification/test_ner_examples.py::ExamplesTests::test_run_ner 13.65s call examples/test_examples.py::ExamplesTests::test_run_language_modeling 3.80s call examples/test_examples.py::ExamplesTests::test_generation ``` After: 69.3 Seconds ```bash ============================ slowest test durations ============================ 19.12s call examples/token-classification/test_ner_examples.py::ExamplesTests::test_run_ner 14.60s call examples/test_examples.py::ExamplesTests::test_run_language_modeling 13.49s call examples/test_examples.py::ExamplesTests::test_run_glue 3.87s call examples/summarization/test_summarization_examples.py::TestBartExamples::test_bart_run_sum_cli 3.08s call examples/translation/t5/test_t5_examples.py::TestT5Examples::test_t5_cli 2.64s call examples/summarization/test_summarization_examples.py::TestT5Examples::test_t5_cli 2.20s call examples/summarization/test_summarization_examples.py::TestBartExamples::test_t5_run_sum_cli 1.81s call examples/test_examples.py::ExamplesTests::test_run_squad 1.67s call examples/test_examples.py::ExamplesTests::test_generation ```
06-16-2020 12:47:53
06-16-2020 12:47:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=h1) Report > Merging [#5059](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d5477baf7d87b9bdad386f2f317732b85277b06b&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5059/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5059 +/- ## ======================================= Coverage 77.41% 77.41% ======================================= Files 130 130 Lines 22023 22023 ======================================= Hits 17050 17050 Misses 4973 4973 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=footer). Last update [d5477ba...6bef869](https://codecov.io/gh/huggingface/transformers/pull/5059?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Merging so that external contrib can pick up the next steps without conflicts. Feel free to add comments here or in #5057
transformers
5,058
closed
Error when loading Flaubert model
# 🐛 Bug ## Information I am trying to run the example run_ner.pl using the Flaubert model. But I got this error: Traceback (most recent call last): File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 191, in _check_seekable f.seek(f.tell()) AttributeError: 'NoneType' object has no attribute 'seek' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/py37/lib/python3.7/site-packages/transformers/modeling_utils.py", line 516, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 387, in load return _load(f, map_location, pickle_module, **pickle_load_args) File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 549, in _load _check_seekable(f) File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 194, in _check_seekable raise_err_msg(["seek", "tell"], e) File "/py37/lib/python3.7/site-packages/torch/serialization.py", line 187, in raise_err_msg raise type(e)(msg) AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/py37/lib/python3.7/site-packages/transformers/modeling_auto.py", line 1098, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/py37/lib/python3.7/site-packages/transformers/modeling_utils.py", line 519, in from_pretrained "Unable to load weights from pytorch checkpoint file. " OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. I got an other error when I added from_tf=True. Do you have idea how can I solve this problem.
06-16-2020 12:47:27
06-16-2020 12:47:27
Hi! Could you show what command you used to launch the script?<|||||>I was running the token classification example: https://github.com/huggingface/transformers/blob/master/examples/token-classification/run.sh with flaubert-large-cased as the model name. I tried to use the downloaded model from the https://huggingface.co/flaubert models, but after 100 epochs the results are very bad; the model did not learn anything. I don't understand what the problem is with the flaubert-large-cased model. Note that the flaubert-base-cased model gives good results on NER tasks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,057
closed
Examples tests improvements
There are a few things about the `examples/` tests that are suboptimal: 1. They never use cuda or fp16, even if they are available. 2. The `@slow` decorator used in the main tests is not importable, so there are no @slow tests. 3. `test_run_glue` uses distilbert-base-cased. It should use a smaller model, one of the `tiny` family [here](https://huggingface.co/models?search=sshleifer/tiny) or a new tiny model. 4. There is no test coverage for TPU. Any help on any of these fronts would be much appreciated!
06-16-2020 12:45:32
06-16-2020 12:45:32
Hi, @sshleifer I would like to work on this issue. Shall I take this up. <|||||>Yes, I would pick one item from the list to start with. Make sure you pull first, I just merged some improvements.<|||||>@sshleifer I will work on the first one. Just to be clear I will note down what I have understood and what I have in mind to do. 1. The issue as per my understanding: The tests in the example folder are not up to the mark and we have to add certain parts to fix this. For this, as the first point suggests when running tests in the examples folder the tests is not checking if Cuda or fp16 is available. 2. There are 4 tests in the `test_examples.py` - text-classification(run_glue) - language-modeling(run_language_modeling) - question-answering(run_squad) - text-generation(run_generation) so each should run with cuda or fp16 if available. correct if I am wrong.<|||||>Good idea. 1) Yes. I think the desired behavior is if `torch.cuda.is_available()`: - assume fp16 is available - run the code with fp16 and cude. Try to do that for all tests. Some will likely break. You can add a TODO to those and keep them running on CPU for now. 1b) You probably need a GPU to do this PR. 2) There are more tests than that: ```bash $ ls examples/**/test*.py examples/adversarial/test_hans.py examples/summarization/bertabs/test_utils_summarization.py examples/summarization/test_summarization_examples.py examples/test_examples.py examples/token-classification/test_ner_examples.py examples/translation/t5/test_t5_examples.py ``` <|||||>You don't need to cover all those tests. Feel free to break the work into very small PRs and tag me on them.<|||||>Thanks, @sshleifer for the clarification I will start working on this.<|||||>> 2. The `@slow` decorator used in the main tests is not importable, so there are no @slow tests. This is no longer the case. ```from transformers.testing_utils import slow``` This item can be removed.<|||||>> 3. `test_run_glue` uses distilbert-case-cased. It should use a smaller model, one of the `tiny` family [here](https://huggingface.co/models?search=sshleifer/tiny) or a new tiny model. I tried a few and either they have a wrong head dimension as in `sshleifer/tiny-distilbert-base-cased` (9x2), but tests are (2x2), so it won't load as is (`size mismatch for classifier.weight:` and `size mismatch for classifier.bias`), or they perform terribly with the current test settings. I also did an experiment for the same for the suggested inside the existing test: ``` def test_run_language_modeling(self): stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) # TODO: switch to smaller model like sshleifer/tiny-distilroberta-base ``` with terrible results (perplexity > 5,000, whereas the current one < 35). So when these tiny models are suggested as a replacement to speed things up, what things are to be sacrificed? <|||||>Happy to do big models and mark slow. I just don't want to do big models when we are only testing output shape.<|||||>> Happy to do big models and mark slow. I just don't want to do big models when we are only testing output shape. So then we could write a test that uses a tiny model that does just that? i.e. no outcome quality checks. Leaving big models for quality checks with @slow.<|||||>Yes!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
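A possible shape for the first item (run examples with CUDA/fp16 when a GPU is present), following the `sys.argv` patching pattern already used in `test_examples.py`; the `--fp16`/`--no_cuda` flags come from `TrainingArguments` (fp16 additionally requires apex at that time), and the `run_glue` import is assumed to be set up as in that file:

```python
import sys
import unittest.mock
import torch

import run_glue  # assumes the example script dirs are on sys.path, as in test_examples.py

def with_device_args(argv):
    # Append device flags only when a GPU is actually available.
    if torch.cuda.is_available():
        return argv + ["--fp16"]
    return argv + ["--no_cuda"]

testargs = ["run_glue.py", "--model_name_or_path", "distilbert-base-cased",
            "--do_train", "--output_dir", "./glue_test_output"]
with unittest.mock.patch.object(sys, "argv", with_device_args(testargs)):
    result = run_glue.main()
```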
transformers
5,056
closed
Add more tests on tokenizers serialization - fix bugs
Adds more tests on tokenizer serialization (test when adding tokens, special tokens, etc). Tokenizer's serialization was not thoroughly tested and actually had quite some holes and bugs. Fix related issues.
06-16-2020 11:55:41
06-16-2020 11:55:41
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=h1) Report > Merging [#5056](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b28b53713161a6299c757c32f7179a2cb2d8cbd7&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5056/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5056 +/- ## ========================================== + Coverage 77.96% 78.02% +0.05% ========================================== Files 138 138 Lines 23838 23847 +9 ========================================== + Hits 18585 18606 +21 + Misses 5253 5241 -12 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `92.47% <100.00%> (+0.96%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.82% <100.00%> (+1.95%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <100.00%> (+2.31%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.38% <0.00%> (-1.19%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (ø)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=footer). Last update [b28b537...8f87f25](https://codecov.io/gh/huggingface/transformers/pull/5056?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
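The kind of round-trip these new tests exercise looks roughly like this (a sketch, not the test code itself): add regular and special tokens, save, reload, and check that encodings match.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
tokenizer.add_special_tokens({"additional_special_tokens": ["<special>"]})

tokenizer.save_pretrained("./serialized_tokenizer")   # vocab, added tokens, special tokens map, config
reloaded = BertTokenizer.from_pretrained("./serialized_tokenizer")

assert len(reloaded) == len(tokenizer)
assert reloaded.encode("new_tok1 <special>") == tokenizer.encode("new_tok1 <special>")
```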
transformers
5,055
closed
How can I load the finetuned BART model to memory?
I have finetuned a `facebook/bart-large` model following the example here: https://github.com/huggingface/transformers/blob/master/examples/summarization/finetune.py As an output I got a `checkpointcheckpoint_ckpt_epoch_0.ckpt` file. How can I create a BartForConditionalGeneration instance with updated weights?
06-16-2020 11:47:57
06-16-2020 11:47:57
Duplicate of #4144 <|||||>Thanks!
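A minimal sketch of the loading step asked about above, assuming the `.ckpt` file is a PyTorch Lightning checkpoint produced by the summarization `finetune.py` script, with the transformer wrapped as `self.model` so its weights sit under `state_dict` with a `model.` prefix (the filename and prefix are assumptions about that script's checkpoint layout):

```python
import torch
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Hypothetical checkpoint filename; adjust to the file actually produced by finetune.py.
ckpt = torch.load("checkpoint_ckpt_epoch_0.ckpt", map_location="cpu")

# Keep only the wrapped transformer weights and strip the assumed "model." prefix.
state_dict = {
    k[len("model."):]: v
    for k, v in ckpt["state_dict"].items()
    if k.startswith("model.")
}
model.load_state_dict(state_dict)
model.eval()
```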
transformers
5,054
closed
Add pad_to_multiple_of on tokenizers (reimport)
Reimported from #4731. Introduce `pad_to_multiple_of` on both slow and fast tokenizers. This parameter introduces the "bucketizaton behaviour" also refered as Shape Polymorphism. This is especially usefull when targetting NN dedicated accelerators such as: - NVidia Tensor Core (on >= Volta Architecture) - XLA (PyTorch TPU) - XLA (Jax / Flax) Bonus: - Fix RobertaTokenizer when input is empty `text[0].is_space()` would crash (#3608). Edit (@thomwolf): - updated to the new API - raise a `ValueError` if you want to truncation to a length which is not a multiple of `pad_to_multiple_of`
06-16-2020 11:37:40
06-16-2020 11:37:40
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=h1) Report > Merging [#5054](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5054/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5054 +/- ## ========================================== + Coverage 79.08% 79.09% +0.01% ========================================== Files 138 138 Lines 24078 24081 +3 ========================================== + Hits 19041 19047 +6 + Misses 5037 5034 -3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.48% <ø> (ø)` | | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <ø> (ø)` | | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <100.00%> (ø)` | | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.70% <100.00%> (+0.54%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=footer). Last update [24f46ea...449cba1](https://codecov.io/gh/huggingface/transformers/pull/5054?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
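A minimal usage sketch of the `pad_to_multiple_of` argument described in the PR above, assuming the batched padding API introduced around the same release (the model name and sentences are only illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Pad every example up to the next multiple of 8 so sequence lengths fall into a
# small set of buckets, which is what Tensor Cores and XLA compilation prefer.
batch = tokenizer(
    ["A short sentence.", "A noticeably longer example sentence for padding."],
    padding=True,
    pad_to_multiple_of=8,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # the sequence dimension is a multiple of 8
```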
transformers
5,053
closed
T5 model for classification doesn't work properly for large number of classes.
The T5 model works properly for 50-class classification, but when we try 100 classes it outputs an empty string ("") for a large number of test examples. Is this model suitable for multi-class classification with a large number of classes? If yes, what do you think might be the problem with what I am doing?
06-16-2020 10:56:30
06-16-2020 10:56:30
Hi, @HiteshVamshi , what you are trying is highly experimental, I haven't seen anyone using T5 for 100 class classification. So you'll probably need to experiment with it. I would like to know few more details 1) What is the size of your dataset 2) which version of t5 are you using (t5-small, t5-base, t5-large etc) 3) how many epochs <|||||>Hi, @patil-suraj , I was also experimenting with the T5 model. I got a good performance for 50 classes so tried with more. -The size of the dataset used was 65k. -I used T5-small. -I tested for 10 epochs. But there was no significant improvement after 3 epochs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Can we use the T5 model with only 2000 samples for classifications (not a lot of classes, just around 10)? what about binary classification?
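Since T5 treats classification as text-to-text, empty predictions usually mean the decoder never learned to emit some of the 100 label strings. A minimal sketch of that framing (the `classify:` prefix, model size, and example text are assumptions; in practice the fine-tuned weights would be loaded):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The label is *generated* as text, so with many classes the rare or long label
# strings are harder to learn than a fixed 100-way softmax head would be.
input_ids = tokenizer.encode("classify: the battery died after one hour", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=10)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```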
transformers
5,052
closed
How to consume movement-pruning .h5 models in QnA pipeline
# ❓ Questions & Help ## Details Hi, I am working on the movement pruning method for QnA models. I have performed all the [given steps](https://github.com/huggingface/transformers/tree/master/examples/movement-pruning) in order to generate the pruned model and then used [this notebook](https://github.com/huggingface/transformers/blob/master/examples/movement-pruning/Saving_PruneBERT.ipynb) to generate the .h5 model files. Though I am facing an issue with consuming this model with `QuestionAnsweringPipeline`. For loading the model; the config file is copied from `BertForQuestionAnswering`, as the pruning repo does not generate any config file. `BERT_PRUNED_PATH = SERIALIZATION_DIR+'/dbg' +'/BERT_Pruned/'` `config = BertConfig.from_json_file(BERT_PRUNED_PATH+'config.json')` *we used squad_sparse.h5 which is renamed to tf_model.h5* `model_BERT_pruned = TFBertForQuestionAnswering.from_pretrained(BERT_PRUNED_PATH+'tf_model.h5',config=config)` **or** `model_BERT_pruned = BertForQuestionAnswering.from_pretrained(BERT_PRUNED_PATH+'tf_model.h5',config=config,from_tf=True) ` config = { "architectures": [ "BertForQuestionAnswering" ], "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "type_vocab_size": 2, "vocab_size": 30522 } For pipeline, ` fill_mask_qna_BERT_pruned = pipeline( "question-answering", model=model_BERT_pruned, tokenizer=tokenizer, framework="tf" ) ` Now, when I test the pipeline on questions and context, I get very random answers, probably because the model is not getting loaded in a proper fashion as it should be. @VictorSanh Can you share the instructions on how to load these .h5 format pruned model using Huggingface modules? Or is there any other way to consume the model.
06-16-2020 08:58:26
06-16-2020 08:58:26
Hello, I've only worked with PyTorch for all the movement pruning experiments (even though I've converted the checkpoints on the hub to their TF version). The instructions in the notebook show how you can load an optimized version of the checkpoint (pruning + quantization) which was saved with hdf5 under an .h5 extension. It is not a TensorFlow checkpoint. You would have to adapt the steps in the notebook to do it in TensorFlow.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
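Since the reply above notes that `squad_sparse.h5` is not a TensorFlow checkpoint, one workaround is to skip the quantized hdf5 artifact and load the plain PyTorch checkpoint left by the pruning scripts instead. A minimal sketch, assuming the directory contains a standard `pytorch_model.bin` plus `config.json` (the paths and example inputs are hypothetical):

```python
from transformers import BertForQuestionAnswering, BertTokenizer, pipeline

pruned_dir = "serialization_dir/dbg/BERT_Pruned"  # hypothetical path

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained(pruned_dir)  # PyTorch weights, not the .h5 file

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="Who wrote the report?", context="The report was written by the pruning team."))
```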
transformers
5,051
closed
Fix LR decay in TF Trainer
This PR mainly fixes issue #5045. It also provides better alignment with the PT trainer through: - a `set_seed()` function - use of the `logging_first_step` argument - better logging messages when training and when loading from a checkpoint
06-16-2020 08:44:58
06-16-2020 08:44:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=h1) Report > Merging [#5051](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9022ef021a56db975d25c7108cbd19d0dd399174&el=desc) will **increase** coverage by `0.84%`. > The diff coverage is `6.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5051/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5051 +/- ## ========================================== + Coverage 77.08% 77.93% +0.84% ========================================== Files 138 138 Lines 23841 23855 +14 ========================================== + Hits 18379 18592 +213 + Misses 5462 5263 -199 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.44% <6.66%> (-0.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (+0.14%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5051/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=footer). Last update [9022ef0...ea9f19f](https://codecov.io/gh/huggingface/transformers/pull/5051?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Just rebase on master, should be ok to merge now, unless you have some other things you want me to change @LysandreJik ?
transformers
5,050
closed
TypeError: function() argument 1 must be code, not str
# 🐛 Bug TypeError: function() argument 1 must be code, not str ## Information @dataclass class T2TDataCollator(DataCollator):
06-16-2020 08:23:51
06-16-2020 08:23:51
I get this error when creating the DataCollator class.<|||||>Could you provide more information? It's a bit hard to help you here. What is the code you're using and what is the stack trace?<|||||>Thank you! I have the same problem as in #5049
transformers
5,049
closed
DataCollator problem
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hi everybody I found an error in the following Colab: https://colab.research.google.com/drive/1jwXgtOXE8v8_qkiOCbjFQRFC5semK8T7?usp=sharing Specifically, As far I understand something changed with the implementation of the following snippet: class T2TDataCollator(DataCollator): def collate_batch(self, batch: List) -> Dict[str, torch.Tensor]: .......... <br> I got the following error: **TypeError: function() argument 1 must be code, not str** Can you suggest any workarounds?
06-16-2020 08:09:59
06-16-2020 08:09:59
i have the same. It is new bug. i run this week ago and worked<|||||>try this: ```python class T2TDataCollator: def __call__(self, batch): ``` <|||||>@abrozso Hi and thanks for the hint, however, it doesn't seem to fix the problem. I got the following error when the fine-tuning starts: 06/16/2020 09:03:23 - INFO - transformers.trainer - You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface. 06/16/2020 09:03:23 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred. 06/16/2020 09:03:23 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred. 06/16/2020 09:03:23 - INFO - transformers.trainer - ***** Running training ***** 06/16/2020 09:03:23 - INFO - transformers.trainer - Num examples = 13 06/16/2020 09:03:23 - INFO - transformers.trainer - Num Epochs = 4 06/16/2020 09:03:23 - INFO - transformers.trainer - Instantaneous batch size per device = 8 06/16/2020 09:03:23 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64 06/16/2020 09:03:23 - INFO - transformers.trainer - Gradient Accumulation steps = 4 06/16/2020 09:03:23 - INFO - transformers.trainer - Total optimization steps = 0 Exception in thread Thread-12: Traceback (most recent call last): File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner self.run() File "/usr/lib/python3.6/threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 141, in _loader_worker _, data = next(data_iter) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 352, in __next__ data = self._next_data() File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 392, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) TypeError: 'T2TDataCollator' object is not callable <|||||>@antoniomastro1996: perhaps you can try the xla nightly version (if you are not using that already)<|||||>@abrozso unfortunately, I'm already using the nightly version<|||||>You need to instantiate your `T2TDataCollator`: `data_collator = T2TDataCollator()` (or you could make it a simple function if you don't need any state). Will fix the backward-compatibility this morning.<|||||>The issue is that it is not a class anymore.<|||||>Yes, that will stay. Just remove the subclass to `DataCollator` and everything should work: ``` class MyDataCollator: def __call__(self, features): ... ``` or (once #5060 is merged) ``` class MyDataCollator: def collate_batch(self, features): ... ``` but this will throw a deprecation warning.<|||||>I respectfully disagree with the decision to keep `DataCollator` as a callable. Very many existing trainers and notebooks will break as a result. I think many people would agree that it would be best to create a `DataCollatorCallable` callable or something similar as an addition, not as a replacement.<|||||>Hi all, I'm facing issues with this part of the code (post making changes as suggested above) in T5-Base for QA. 
``` import dataclasses import logging import os import sys from dataclasses import dataclass, field from typing import Dict, List, Optional import numpy as np import torch from transformers import T5ForConditionalGeneration, T5Tokenizer, EvalPrediction from transformers import ( HfArgumentParser, DataCollator, Trainer, TrainingArguments, set_seed, ) logger = logging.getLogger(__name__) # prepares lm_labels from target_ids, returns examples with keys as expected by the forward method # this is necessacry because the trainer directly passes this dict as arguments to the model # so make sure the keys match the parameter names of the forward method @dataclass class T2TDataCollator: #(DataCollator) def __call__(self, batch: List) -> Dict[str, torch.Tensor]: # """ Take a list of samples from a Dataset and collate them into a batch. Returns: A dictionary of tensors """ input_ids = torch.stack([example['input_ids'] for example in batch]) lm_labels = torch.stack([example['target_ids'] for example in batch]) lm_labels[lm_labels[:, :] == 0] = -100 attention_mask = torch.stack([example['attention_mask'] for example in batch]) decoder_attention_mask = torch.stack([example['target_attention_mask'] for example in batch]) return { 'input_ids': input_ids, 'attention_mask': attention_mask, 'lm_labels': lm_labels, 'decoder_attention_mask': decoder_attention_mask } ``` **Which is fetching this error:-** ``` Exception in thread Thread-12: Traceback (most recent call last): File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner self.run() File "/usr/lib/python3.6/threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 133, in _loader_worker _, data = next(data_iter) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__ data = self._next_data() File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 557, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "<ipython-input-7-7b8c1b4d4c9a>", line 36, in __call__ lm_labels = torch.stack([example['target_ids'] for example in batch]) File "<ipython-input-7-7b8c1b4d4c9a>", line 36, in <listcomp> lm_labels = torch.stack([example['target_ids'] for example in batch]) KeyError: 'target_ids' ``` My train and validation dataset has 'target_ids' field (read from `datasets.Dataset.from_pandas()` method and mapped the `add_eos_to_examples` and `convert_to_features` successfully): `train_dataset['target_ids']` ``` tensor([[ 1027, 9533, 3440, ..., 0, 0, 0], [ 7327, 1387, 11597, ..., 0, 0, 0], [ 272, 5, 7130, ..., 0, 0, 0], ..., [15810, 1, 0, ..., 0, 0, 0], [ 7107, 1, 0, ..., 0, 0, 0], [ 454, 5, 134, ..., 0, 0, 0]]) ``` `valid_dataset['target_ids']` ``` tensor([[15810, 1, 0, ..., 0, 0, 0], [ 4190, 4329, 1, ..., 0, 0, 0], [ 4329, 11, 7107, ..., 0, 0, 0], ..., [ 3, 4, 1, ..., 0, 0, 0], [ 3, 4, 1, ..., 0, 0, 0], [ 8642, 4425, 9, ..., 0, 0, 0]]) ``` I am unable to fetch this field using `class T2TDataCollator:`. Please assist, thank you!
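For the `KeyError: 'target_ids'` above, one common cause is that the dataset's torch format only exposes a subset of columns. A minimal sketch, assuming the dataset comes from the `nlp`/`datasets` library and reusing `train_dataset`, `valid_dataset`, and `T2TDataCollator` from the comment above:

```python
# Make every field the collator needs available as torch tensors.
columns = ["input_ids", "attention_mask", "target_ids", "target_attention_mask"]
train_dataset.set_format(type="torch", columns=columns)
valid_dataset.set_format(type="torch", columns=columns)

# The Trainer expects an *instance* of the collator, not the class itself.
data_collator = T2TDataCollator()
batch = data_collator([train_dataset[0], train_dataset[1]])
print({k: v.shape for k, v in batch.items()})
```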
transformers
5,048
closed
After I resume learning, loss is greater than prev checkpoint
# ❓ Questions & Help ## Details I trained an ALBERT MLM model. Often an error occurs (memory exception, ...), so I resumed training from the last checkpoint, but the training loss goes back to the beginning. What do I need to check? Thanks. ![loss ](https://user-images.githubusercontent.com/4244158/84736498-ed7abb00-afe0-11ea-9b30-9e669e327241.png)
06-16-2020 05:52:31
06-16-2020 05:52:31
Hi! What did you use to train your model? Was it the `run_language_modeling` script? Do you happen to have the command you used?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,047
closed
How to use 16 token types in pretrained Albert/BERT?
I have a dialogue task and I use token types to distinguish the different states of the different utterances, but all the pretrained models I can find have type_vocab_size=2. To accomplish my goal, I have to rewrite a lot of code in a dirty way. So I want to ask: is there an elegant way to restore the pretrained weights and ignore the token type embeddings at the same time? Simply modifying the type_vocab_size in the given config.json will certainly raise an error.
06-16-2020 03:03:49
06-16-2020 03:03:49
Maybe I don't get it, but can't you simply do: ``` from transformers.modeling_bert import BertConfig, BertModel bconfig = BertConfig.from_pretrained('bert-base-uncased') bconfig.type_vocab_size = 16 model = BertModel(bconfig) model.parameters # ... # (token_type_embeddings): Embedding(16, 768) # ... ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> Maybe I don't get it, but can't you simply do: > > ``` > from transformers.modeling_bert import BertConfig, BertModel > > bconfig = BertConfig.from_pretrained('bert-base-uncased') > bconfig.type_vocab_size = 16 > model = BertModel(bconfig) > model.parameters > # ... > # (token_type_embeddings): Embedding(16, 768) > # ... > ``` Yes. What I can do is to reassign the token type embeddings after init, and thing is if there is any risk to do this. But I don't continue on this because my dialogue task is too difficult for almost language model even with BERT hahhhhh<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
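A minimal sketch of the "reassign the token type embeddings after init" idea from the comments above: load the pretrained checkpoint as-is, then swap in a larger embedding and copy over the two pretrained rows (whether the remaining randomly initialized rows train well for the dialogue task is an assumption):

```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")  # loads the original 2-type embedding

old = model.embeddings.token_type_embeddings             # Embedding(2, hidden_size)
new = torch.nn.Embedding(16, old.weight.size(1))
new.weight.data.normal_(mean=0.0, std=model.config.initializer_range)
new.weight.data[:2] = old.weight.data                     # keep the two pretrained rows
model.embeddings.token_type_embeddings = new
model.config.type_vocab_size = 16
```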
transformers
5,046
closed
ref #4733
# 🐛 Bug ## Information Model I am using TFBertEncoder: Language I am using the model on English: The problem arises when using: * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. When I use, TFBertEncoder, I get an error. Here is my code. ```python import tensorflow as tf import numpy as np from transformers.modeling_tf_bert import BertConfig, TFBertEncoder print(tf.__name__, tf.__version__) input_a = tf.keras.layers.Input(shape=(91, 128)) config = BertConfig() config.hidden_size = 128 config.num_attention_heads = 4 # config.output_attentions = False # config.output_hidden_states = False head_mask = [None for _ in range(config.num_hidden_layers)] encoder_output = TFBertEncoder(config=config)([input_a, None, head_mask])[0] print(encoder_output.shape) test_out = tf.keras.layers.Dense(128)(encoder_output) print(test_out.shape) ``` ## Expected behavior Here is the error: ``` (None, 91, 128) 2020-06-03 11:18:10.160647: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Failed precondition: Error while reading resource variable _AnonymousVar189 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar189/class tensorflow::Var does not exist. [[{{node output_23/dense/BiasAdd/ReadVariableOp}}]] Traceback (most recent call last): File "D:/python/tx/TEST.py", line 16, in <module> a = tf.keras.layers.Dense(128)(encoder_output) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 720, in __call__ base_layer_utils.create_keras_history(inputs) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 187, in create_keras_history _, created_layers = _create_keras_history_helper(tensors, set(), []) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper layer_inputs, processed_ops, created_layers) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper layer_inputs, processed_ops, created_layers) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper layer_inputs, processed_ops, created_layers) [Previous line repeated 5 more times] File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 247, in _create_keras_history_helper constants[i] = backend.function([], op_input)([]) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3727, in __call__ outputs = self._graph_fn(*converted_inputs) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1551, in __call__ return self._call_impl(args, kwargs) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1591, in _call_impl return self._call_flat(args, self.captured_inputs, cancellation_manager) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1692, in _call_flat ctx, args, cancellation_manager=cancellation_manager)) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 545, in call ctx=ctx) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute 
six.raise_from(core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable _AnonymousVar189 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar189/class tensorflow::Var does not exist. [[node output_23/dense/BiasAdd/ReadVariableOp (defined at /python/tx/TEST.py:16) ]] [Op:__inference_keras_scratch_graph_5205] Function call stack: keras_scratch_graph ``` ## Environment info * `transformers` version: 2.3.0 (in conda list) * Platform: * Python version:3.7 * PyTorch version (GPU?): * Tensorflow version (GPU?):TF2.1.0(GPU) * Using GPU in script?: * Using distributed or parallel set-up in script?:No
06-16-2020 03:00:37
06-16-2020 03:00:37
@patrickvonplaten Hello, is there a way to solve this problem?<|||||>Closing since this is a duplicate of https://github.com/huggingface/transformers/issues/4733.
transformers
5,045
closed
TFTrainer does not consider number of epochs when calculating learning rate
# 🐛 Bug `TFTrainer` does not consider number of epochs when calculating learning rate ## Information When using `TFTrainer`, learning rate decreases to 0 at the end of the first epoch, even when we want to train on multiple epochs. The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: run_tf_glue with mrpc task * [ ] my own task or dataset: (give details below) ## To reproduce `python run_tf_glue.py --model_name_or_path bert-base-cased --task_name MRPC --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/test_hf/ --overwrite_output_dir --logging_dir hf --evaluate_during_training --eval_steps 50 --logging_steps 10` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> This is what we get, where learning rate is null after the first epoch: ![image](https://user-images.githubusercontent.com/715491/84722988-39c6ec00-af4a-11ea-9c80-f2213d5a579c.png) You can refer to [W&B run](https://app.wandb.ai/borisd13/huggingface/runs/2zcsfumy?workspace=user-borisd13) for more details. ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Learning rate should slowly decrease until end of 3rd epoch. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux-5.3.0-53-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): N/A - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: Yes (one only) - Using distributed or parallel set-up in script?: No @jplu I recorded the issue here so we don't forget to fix it. Let me know if I can help.
06-16-2020 01:55:05
06-16-2020 01:55:05
This is fixed
transformers
5,044
closed
refactor(wandb): consolidate import
This PR consolidates the import logic of wandb as suggested [here](https://github.com/huggingface/transformers/pull/4946#discussion_r440070708)
06-16-2020 01:43:03
06-16-2020 01:43:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=h1) Report > Merging [#5044](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f9f8a5312e92541ff9a5f483fc4907ec87da876e&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `64.28%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5044/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5044 +/- ## ========================================== + Coverage 77.39% 77.40% +0.01% ========================================== Files 130 130 Lines 22018 22014 -4 ========================================== Hits 17041 17041 + Misses 4977 4973 -4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5044/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `78.26% <50.00%> (-21.74%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5044/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.52% <100.00%> (-0.06%)` | :arrow_down: | | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5044/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.69% <100.00%> (-0.30%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=footer). Last update [f9f8a53...47b9975](https://codecov.io/gh/huggingface/transformers/pull/5044?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks!
transformers
5,043
closed
Fix marian tokenizer save pretrained
06-16-2020 00:18:18
06-16-2020 00:18:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=h1) Report > Merging [#5043](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/36434220fc807c5015bc8f0f1e50ab21f7d34914&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5043/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5043 +/- ## ======================================= Coverage 77.36% 77.37% ======================================= Files 130 130 Lines 21989 21990 +1 ======================================= + Hits 17012 17014 +2 + Misses 4977 4976 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5043/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.85% <100.00%> (+0.96%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=footer). Last update [3643422...5899f7a](https://codecov.io/gh/huggingface/transformers/pull/5043?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,042
closed
❓ [TFTrainer] How to run on 8 TPU cores ?
# ❓ Questions & Help I'm trying to run TFTrainer on 8 TPU cores, but I don't understand how to make it work. I tried running my script with the flags `--tpu_num_cores 8 --per_device_train_batch_size 8`, expecting each core to handle a batch size of 1. But when I print the shape of my inputs, I have `[8, x]` instead of `[1, x]`, which leads to a memory error. --- If I start the training with `--tpu_num_cores 8 --per_device_train_batch_size 1`, the shape of the inputs is correct (`[1, x]`), but the number of optimization steps computed is not correct (if I have 8k samples, it says I have 8k optimization steps, but I expected 1k steps because I am using 8 TPU cores...). --- Am I doing something wrong? **How can I train on 8 TPU cores, with a batch size of 1 on each core?**
06-16-2020 00:08:22
06-16-2020 00:08:22
Hello ! This is because the `--tpu_num_cores` is not taken into account yet. If you want to use TPUs, just fill the TPU name with `--tpu_name` and it will detect automatically the number of cores. For now TPUs with TF Trainer is under development so some use cases might not work properly.<|||||>Thanks for the input @jplu So as you mentioned the number of TPU cores is automatically detected. It is accessible with `training_args.n_gpu`. Also I didn't notice but here : https://github.com/huggingface/transformers/blob/e4aaa4580515446cd5a2972ab42fec0b95819c84/src/transformers/training_args.py#L150 The batch size is automatically adjusted to the number of cores. So the behavior I observed is completely normal, as `--per_device_train_batch_size` is the batch size **per TPU cores**.
transformers
5,041
closed
How can I use tokenizer.encode_plus to input and encode 2 sentences - (query,answer) pair for training a BERT binary classifier?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
06-15-2020 23:37:12
06-15-2020 23:37:12
Here my intention is to train a BERT binary classifier which classifies whether an answer corresponding to a given query is correct. How do I encode the query-answer pair in the input?<|||||>@soumya-ranjan-sahoo You could check the [token type ids](https://huggingface.co/transformers/glossary.html#token-type-ids). Feed the question and answer to, for instance, the encode_plus function and generate the token type ids. Then you can feed the token type ids together with, for instance, the input ids and attention masks to conduct classification.<|||||>@bright1993ff66 Great. I was successful in my experiment. But now I have a follow-up question. I understand the maximum permissible input length for BERT is 512 tokens. In my case (sentence pair classification), does that imply the combined token length of the query and the answer has to be at most 512, since I have huge answers in my experiment? Surprisingly, I was able to fine-tune BERT with query and answer (most query-answer pairs have a combined length of more than 512 tokens), and BERT didn't throw any errors or warnings. How did it fine-tune, and what exactly did it do with such sentences? Thank you. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
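A minimal sketch of encoding the (query, answer) pair discussed above; with an explicit `max_length`, pairs longer than 512 tokens are truncated instead of overflowing the position limit (the model name and texts are only illustrative):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

query = "What is the capital of France?"
answer = "Paris has been the capital of France for centuries."

# Passing the answer as the second argument builds [CLS] query [SEP] answer [SEP]
# and fills token_type_ids with 0 for the query tokens and 1 for the answer tokens.
encoded = tokenizer.encode_plus(
    query,
    answer,
    max_length=512,
    pad_to_max_length=True,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)
print(encoded["token_type_ids"][0][:16])
```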
transformers
5,040
closed
"AutoTokenizer.from_pretrained" does not work when loading a pretrained MarianTokenizer from a local directory
# 🐛 Bug ## Information I want to save MarianConfig, MarianTokenizer, and MarianMTModel to a local directory ("my_dir") and then load them: > import transformers > > transformers.AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir") > transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir") > transformers.AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir") > > config = transformers.AutoConfig.from_pretrained("my_dir") > tokenizer = transformers.AutoTokenizer.from_pretrained("my_dir") > model = transformers.AutoModelWithLMHead.from_pretrained("my_dir") But the above code failed when loading the saved MarianTokenizer from "my_dir": > Traceback (most recent call last): > File "<input>", line 8, in <module> > File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 206, in from_pretrained > return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) > File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 911, in from_pretrained > return cls._from_pretrained(*inputs, **kwargs) > File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 1062, in _from_pretrained > tokenizer = cls(*init_inputs, **init_kwargs) > File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_marian.py", line 83, in __init__ > self.spm_source = load_spm(source_spm) > File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_marian.py", line 236, in load_spm > spm.Load(path) > File "/Users/anaconda/lib/python3.6/site-packages/sentencepiece.py", line 367, in Load > return self.LoadFromFile(model_file) > File "/Users/anaconda/lib/python3.6/site-packages/sentencepiece.py", line 177, in LoadFromFile > return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) > TypeError: not a string >
06-15-2020 22:51:09
06-15-2020 22:51:09
I noticed that after saving the pretrained MarianTokenizer to "my_dir", the "source.spm" file and "target.spm" file are actually named as: > 1bec78f268e25152d11e6efa41998f2ebebe3ce5452c952c90fc7264c8c45a5b.23f506277c63e64e484c4de9d754a6625e5ba734bb6153470be9b7ffdb7c4ac5 and > 5f95a1efcd8b6093955eb77d42cf97bde71563395863991bd96ad0832776f409.52488b746595fe55ab4afaebb1c23e29994354ddfebd6eddb77815395dc1d604 When I changed the file names back to "source.spm" and "target.spm", the error disappears.<|||||>I figured it out! The spm files are coming from the cache. So their names are not human readable! Fixed by tomorrow.<|||||>Thanks a lot... Will this fix be included in the next release?<|||||>Yes!<|||||>Same issue exists for `albert` models also<|||||>Please make a new issue with instructions to reproduce. Thanks!<|||||>Did you ever solve this for Albert models? @mittalsuraj18
transformers
5,039
closed
Ability to pickle/unpickle BatchEncoding pickle (reimport)
Overrides the methods get_state() & set_state() to (respectively) export the content of the underlying data dictionary and - if defined - the content of encodings. Unit tests added to cover the serialization & deserialization of all the exported properties. Reimported from #4515
06-15-2020 22:38:12
06-15-2020 22:38:12
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=h1) Report > Merging [#5039](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/36434220fc807c5015bc8f0f1e50ab21f7d34914&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5039/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5039 +/- ## ======================================= Coverage 77.36% 77.37% ======================================= Files 130 130 Lines 21989 21998 +9 ======================================= + Hits 17012 17021 +9 Misses 4977 4977 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5039/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `91.69% <100.00%> (+0.13%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5039/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5039/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=footer). Last update [3643422...5e40fe4](https://codecov.io/gh/huggingface/transformers/pull/5039?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
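A minimal sketch of what the PR above enables, assuming a fast tokenizer so the underlying `encodings` are populated:

```python
import pickle
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoding = tokenizer.encode_plus("Hello world")  # returns a BatchEncoding

# Once (de)serialization is overridden, the BatchEncoding survives a pickle round
# trip, e.g. when batches are shipped to multiprocessing DataLoader workers.
restored = pickle.loads(pickle.dumps(encoding))
assert restored["input_ids"] == encoding["input_ids"]
```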
transformers
5,038
closed
Cannot save and load pretrained MarianTokenizer
# 🐛 Bug ## Information I want to save a pretrained MarianTokenizer to a local directory ("my_dir") and then load it: > import transformers > transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir") > tokenizer = transformers.AutoTokenizer.from_pretrained("my_dir") But the above code failed: > Traceback (most recent call last): > File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_utils.py", line 239, in get_config_dict > local_files_only=local_files_only, > File "/Users/anaconda/lib/python3.6/site-packages/transformers/file_utils.py", line 267, in cached_path > raise EnvironmentError("file {} not found".format(url_or_filename)) > OSError: file my_dir/config.json not found > During handling of the above exception, another exception occurred: > Traceback (most recent call last): > File "<input>", line 1, in <module> > File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 195, in from_pretrained > config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) > File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_auto.py", line 196, in from_pretrained > config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) > File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_utils.py", line 252, in get_config_dict > raise EnvironmentError(msg) > OSError: Can't load config for 'my_dir'. Make sure that: > - 'my_dir' is a correct model identifier listed on 'https://huggingface.co/models' > - or 'my_dir' is the correct path to a directory containing a config.json file >
06-15-2020 22:25:22
06-15-2020 22:25:22
#4371
transformers
5,037
closed
The correct way to save and load pretrained MarianTokenizer?
I want to save a pretrained MarianTokenizer to a local directory ("my_dir") and then load it: > import transformers > transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de").save_pretrained("my_dir") > tokenizer = transformers.AutoTokenizer.from_pretrained("my_dir") But the above code failed: > Traceback (most recent call last): > File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_utils.py", line 239, in get_config_dict > local_files_only=local_files_only, > File "/Users/anaconda/lib/python3.6/site-packages/transformers/file_utils.py", line 267, in cached_path > raise EnvironmentError("file {} not found".format(url_or_filename)) > OSError: file my_dir/config.json not found > During handling of the above exception, another exception occurred: > Traceback (most recent call last): > File "<input>", line 1, in <module> > File "/Users/anaconda/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 195, in from_pretrained > config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) > File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_auto.py", line 196, in from_pretrained > config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) > File "/Users/anaconda/lib/python3.6/site-packages/transformers/configuration_utils.py", line 252, in get_config_dict > raise EnvironmentError(msg) > OSError: Can't load config for 'my_dir'. Make sure that: > - 'my_dir' is a correct model identifier listed on 'https://huggingface.co/models' > - or 'my_dir' is the correct path to a directory containing a config.json file >
06-15-2020 22:23:40
06-15-2020 22:23:40
#4371
transformers
5,036
closed
Refactor Code samples; Test code samples
Refactoring the code samples in order to prevent copy/pasting the same code samples across classes while updating the model/tokenizer classes and checkpoint names. - All models now have their docstrings updated. - Doctest is used for testing - Fixed a bunch of bugs in all docstrings as well as a few models. All non-cosmetic changes are highlighted below.
06-15-2020 22:17:06
06-15-2020 22:17:06
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=h1) Report > Merging [#5036](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.22%`. > The diff coverage is `97.44%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5036/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5036 +/- ## ========================================== + Coverage 79.08% 79.30% +0.22% ========================================== Files 138 138 Lines 24078 24265 +187 ========================================== + Hits 19041 19243 +202 + Misses 5037 5022 -15 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (ø)` | | | [src/transformers/configuration\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | | | [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `97.05% <ø> (ø)` | | | ... and [50 more](https://codecov.io/gh/huggingface/transformers/pull/5036/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=footer). Last update [24f46ea...a9bb134](https://codecov.io/gh/huggingface/transformers/pull/5036?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This is amazing! This way we won't do as many mistakes while copy-pasting code for introducing those task-specific models :-)
transformers
5,035
closed
update for roberta and xlm
Two changes are done: Update the inputs with langs during training and evaluation. Update the token_type_ids for Roberta otherwise it throws an error while creating features.
06-15-2020 22:04:31
06-15-2020 22:04:31
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,034
closed
Training & fine-tuning quickstart
This PR adds a short guide to the docs demonstrating how to train & fine-tune models in native PyTorch, native TF2, and using Trainer. My aim was not to show how to train on every type of task, but simply to communicate the key points along with a couple of very simple and easy-to-follow examples. More involved examples spanning many tasks are linked at the bottom.
06-15-2020 21:56:07
06-15-2020 21:56:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=h1) Report > Merging [#5034](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5034/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5034 +/- ## ========================================== + Coverage 79.08% 79.10% +0.02% ========================================== Files 138 138 Lines 24078 24078 ========================================== + Hits 19041 19046 +5 + Misses 5037 5032 -5 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.77% <0.00%> (+0.14%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=footer). Last update [24f46ea...3ffd35b](https://codecov.io/gh/huggingface/transformers/pull/5034?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@patrickvonplaten This is a little bit less verbose than what I think you were envisioning but curious what you think. I just showed sequence classification to communicate the general principles rather than covering multiple different tasks, which would make things pretty long.<|||||>I think it's great! At the moment we can't really show causal lm training because it's not implemented yet in TF :D So sequence classification sounds good to me! In the longer term, think we should have one section for each model type for both TF and PT: - Causal LM - Masked LM - Seq2Seq - Seq classifaction .... But for now I like it! <|||||>I don't think, the section would become too long if we show training for every model type (CLM, MLM, Seq2Seq, ...). If we would start with CLM and in the following sections only add a couple of sentences explaining what should be done differently for MLM *e.g.* and so on I don't think the page becomes too long. Nevertheless, I'm still wondering if we should have a training section on each model page since training can differ quite a lot between models: XLNet has a very special training scheme, T5 pretraining is different from Bart, Longformer has a special global attention that has to be set, ... => what do you think about this @sgugger @joeddav ? <|||||>I personally think this is okay if each task is shown in a different notebook/tutorial (there is a big table of tasks after all). 
When we are at a point where all tasks can be easily loaded in a few lines of code we can maybe show more, but I fear that the specificity of each task/dataset requiring its own preprocessing function will lose the reader when the essential point of this (beginner's) tutorial is on training and Trainer/TFTrainer. Having an example on each model page may also be problematic since models can be used for several tasks. So it might turn up in having way too many things in the docs as well. For now I think making more independent notebooks that show how to train/fine-tune a model on a given task and link to those in all the right places might be the best solution. That way the reader opts in to see this model trained on that task.<|||||>I think it's a legitimate question how much guidance we should give for more obscure cases that you mentioned, @patrickvonplaten. My feeling is that those are fairly specialized and it's fair to expect users to be able to figure out more advanced cases like that out between docs/source code/table of tasks. I wouldn't be opposed to incorporating some of the more common tasks here (e.g. MLM training), but I generally agree with @sgugger to err on the side of brevity and clarity for the purpose of a quickstart guide like this one. Then we can lean on a combination of model docs and the big table of tasks for the more obscure/specialized cases. Also, it would help to have docs for Trainer.
transformers
5,033
closed
update for roberta and xlm
Two changes are made: 1. Pass the `langs` inputs during training and evaluation. 2. Fix the `token_type_ids` for RoBERTa, which otherwise throws an error while creating features.
06-15-2020 21:52:41
06-15-2020 21:52:41
check_code_quality failed. Updating and sending a new request.
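For context, a rough sketch of the idea behind the two changes (the helper and variable names below are illustrative, not the actual diff in this PR): pass `langs` only for XLM, and drop the segment ids for models such as RoBERTa that do not use them.

```python
def build_inputs(batch, model_type, lang_id=None):
    # batch: (input_ids, attention_mask, token_type_ids, labels) as torch tensors,
    # in the order produced by the example scripts.
    inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]}
    if model_type in ("bert", "xlnet", "albert"):
        # RoBERTa (and DistilBERT) do not use segment ids, so they are left out for those models.
        inputs["token_type_ids"] = batch[2]
    if model_type == "xlm" and lang_id is not None:
        # XLM expects a `langs` tensor of the same shape as input_ids, filled with the language id.
        inputs["langs"] = batch[0].new_full(batch[0].shape, lang_id)
    return inputs
```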
transformers
5,032
closed
Add DistilBertForMultipleChoice
Another missing model
06-15-2020 21:40:51
06-15-2020 21:40:51
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=h1) Report > Merging [#5032](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbad4c6989d489097f42bbe38001a3f8ca1c5c11&el=desc) will **increase** coverage by `0.03%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5032/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5032 +/- ## ========================================== + Coverage 77.19% 77.22% +0.03% ========================================== Files 128 128 Lines 21877 21906 +29 ========================================== + Hits 16888 16918 +30 + Misses 4989 4988 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | | | [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.58% <ø> (ø)` | | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.70% <100.00%> (+0.20%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=footer). Last update [bbad4c6...fdafefb](https://codecov.io/gh/huggingface/transformers/pull/5032?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,031
closed
Some changes to simplify the generation function
This PR proposes to simplify generation in `modeling_utils.py` in the following ways:
1. Removes some redundant code in `_generate_no_beam_search`: finished sequences are padded at generation time, and do not need to be padded again before returning
2. Initializes the cache in its permanent form directly for both `_generate_beam_search` and `_generate_no_beam_search`: this removes the need for a first-step test in `modeling_bart.py` and `modeling_t5.py` (and presumably future cached seq2seq architectures)
3. Takes all of the logit post-processing out of `_generate_beam_search` and `_generate_no_beam_search` and puts it in a single `finalize_generation_logscores` function that can be used in both instead of duplicating the code

The following slow tests pass in addition to the basic suite:
```
RUN_SLOW=1 pytest tests/test_modeling_bart.py
RUN_SLOW=1 pytest tests/test_modeling_gpt2.py
RUN_SLOW=1 pytest tests/test_modeling_t5.py
```
#### Small breaking changes
1. The previous version of `_generate_no_beam_search` seemed to be adding padding twice. We removed it as it seemed redundant, but noting it here just in case.
2. This PR moves the addition of the CTRL length penalty from before to after the `log_softmax` in `_generate_beam_search`. This changes scores a little bit but apparently doesn't drastically alter model behavior. Per @patrickvonplaten : > Now, the only thing that will slightly change with this function is the CTRL enforce penalty for beam search decoding. I tried the new order out on a couple of tensors and the changes are minimal. Also, the repetition penalty is very hacky anyways and we already changed the function from its original formula of the CTRL paper.
06-15-2020 20:53:45
06-15-2020 20:53:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=h1) Report > Merging [#5031](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7291ea0bff57a017e71b1ea8ec01ff19da298bf0&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `90.32%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5031/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5031 +/- ## ========================================== + Coverage 77.24% 77.26% +0.01% ========================================== Files 133 133 Lines 22146 22128 -18 ========================================== - Hits 17107 17097 -10 + Misses 5039 5031 -8 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <88.88%> (+0.32%)` | :arrow_up: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.25% <100.00%> (-0.02%)` | :arrow_down: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <100.00%> (ø)` | | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `84.16% <100.00%> (-0.07%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+0.62%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=footer). Last update [7291ea0...f507cd7](https://codecov.io/gh/huggingface/transformers/pull/5031?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Okey I took a deeper look at the PR! Quite hard to see what's all going on there, so a nice refactoring would be very welcome :-). I very much like the small changes that are done here that clearly improve the readability and clean the code. The big change of unifying the score computation I am not yet 100% convinced it helps a lot. I agree that we should refactor the generate() function, but not sure whether this is going in the right direction. Pro's - The function reduces duplicated code Con's - The function adds more computational cost to the `.generate()` function, which is already quite expensive and heavily used in GPT2 by an additional softmax function. Thinking about how big the output embedding matrices are, this could be significant no? I do agree though that we should unify all these functions that are applied to the scores. 
IMO, we have to be very careful with everything that touches sampling in `_generate_no_beam_search` (greedy decoding is less used here) and everything that touches `argmax` in `_generate_beam_search` (summarization and translation rely on that). My proposal would be the following: let's unify all functions that are applied after the `F.log_softmax(next_token_logits, dim=-1)` line into one function that expects the scores for beam search and the logits for no beam search (I like @sshleifer's naming - I would just say `postprocess_next_token_scores`). This function should be independent of whether we sample or use the argmax:
```python
def postprocess_next_token_scores(
    self,
    scores,
    input_ids,
    batch_size,
    num_beams,
    no_repeat_ngram_size,
    bad_words_ids,
    cur_len,
    min_length,
    eos_token_id,
    repetition_penalty,
):
    # repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)
    if repetition_penalty != 1.0:
        self.enforce_repetition_penalty_(
            scores, batch_size, num_beams, input_ids, repetition_penalty,
        )

    # set eos token prob to zero if min_length is not reached
    if eos_token_id is not None and cur_len < min_length:
        scores[:, eos_token_id] = -float("inf")

    if no_repeat_ngram_size > 0:
        # calculate a list of banned tokens to prevent repetitively generating the same ngrams
        num_batch_hypotheses = batch_size * num_beams
        # from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345
        banned_batch_tokens = calc_banned_ngram_tokens(
            input_ids, num_batch_hypotheses, no_repeat_ngram_size, cur_len
        )
        for i, banned_tokens in enumerate(banned_batch_tokens):
            scores[i, banned_tokens] = -float("inf")

    if bad_words_ids is not None:
        # calculate a list of banned tokens according to bad words
        banned_tokens = calc_banned_bad_words_ids(input_ids, bad_words_ids)
        for i, banned_tokens in enumerate(banned_tokens):
            scores[i, banned_tokens] = -float("inf")

    return scores
```
This way, 1) we have a function that is applicable to both sampling and greedy search and should only contain functions that are, and 2) there is no additional computation cost for the softmax.<|||||>The temperature function should then, for both beam search and no beam search, only be applied in the `if do_sample=True` statement (no need to do this for argmax). Now, the only thing that will slightly change with this function is the CTRL enforce penalty for beam search decoding. I tried the new order out on a couple of tensors and the changes are minimal. Also, the repetition penalty is very hacky anyways and we already changed the function from its original formula of the CTRL paper. Also, I haven't seen that anybody really used the function for beam search. What do you think @sshleifer?<|||||>`repetition_penalty!=1` is only used by CTRL, `model_cards/mrm8488/t5-base-finetuned-summarize-news/README.md:` and `model_cards/gaochangkuan/model_dir/README.md` So I think minimal changes to when it is computed are fine.<|||||>I changed `finalize_logits` to @patrickvonplaten 's suggested `postprocess_next_token_scores`. Following @sshleifer 's sage advice, I'm leaving the BART starting hack and the temperature in the main body of the generate function for now, and will leave dealing with those for a future PR :) <|||||>Great, I'm happy with the PR - I think it already makes generate a bunch more readable. Can we note in the PR description that we have slight breaking changes for beam search sampling when running with the repetition penalty?
and it would be nice to make the function call more robust by using keyword arguments, the same way it is done with `_generate_no_beam_search()` <|||||>> Great, I'm happy with the PR - I think it already makes generate a bunch more readable. > Can we note in the PR description that we have slight breaking changes for beam search sampling when running with the repetition penalty? and it would be nice to make the function call more robust by using keyword arguments the same way it is done with `_generate_no_beam_search()` Done and done, will merge today.<|||||>In the future, we also need to run
```bash
RUN_SLOW=1 pytest tests/test_modeling_marian.py
```
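To make the shape of the refactor concrete, here is a small self-contained sketch (heavily simplified and only illustrative; the real `generate()` also handles caches, beams and finished hypotheses) of a `postprocess_next_token_scores`-style hook shared by the greedy/beam and sampling paths.

```python
import torch
import torch.nn.functional as F

def postprocess_next_token_scores(scores, cur_len, min_length, eos_token_id):
    # Minimal illustration: block EOS until min_length tokens have been generated.
    if eos_token_id is not None and cur_len < min_length:
        scores[:, eos_token_id] = -float("inf")
    return scores

logits = torch.randn(2, 10)                        # (batch_size, vocab_size) from the model
scores = F.log_softmax(logits, dim=-1)             # shared between beam and no-beam paths
scores = postprocess_next_token_scores(scores, cur_len=1, min_length=5, eos_token_id=0)
greedy_next = scores.argmax(dim=-1)                # greedy / beam path
sampled_next = torch.multinomial(scores.exp(), 1)  # sampling path uses the same post-processed scores
```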
transformers
5,030
closed
Update pipeline examples to doctest syntax
Also fixes the values to match what's actually returned. This way we can run ``` python -m doctest README.md ``` to check that this code produces the exact same results. Not sure if we want to include this in our CI in some way.
06-15-2020 20:28:44
06-15-2020 20:28:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=h1) Report > Merging [#5030](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbad4c6989d489097f42bbe38001a3f8ca1c5c11&el=desc) will **decrease** coverage by `0.07%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5030/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5030 +/- ## ========================================== - Coverage 77.19% 77.12% -0.08% ========================================== Files 128 128 Lines 21877 21877 ========================================== - Hits 16888 16872 -16 - Misses 4989 5005 +16 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.34% <0.00%> (-0.24%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=footer). Last update [bbad4c6...db3cb29](https://codecov.io/gh/huggingface/transformers/pull/5030?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I didn't know this feature, this is pretty cool
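For readers unfamiliar with the doctest syntax being adopted here, a short illustration (the score below is made up for the example; the actual README pins the real returned values so that `python -m doctest README.md` can verify them):

```python
>>> from transformers import pipeline
>>> nlp = pipeline("sentiment-analysis")
>>> nlp("We are very happy to include pipelines in transformers!")
[{'label': 'POSITIVE', 'score': 0.99}]
```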
transformers
5,029
closed
Add reference to NLP (package) dataset
06-15-2020 20:23:30
06-15-2020 20:23:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=h1) Report > Merging [#5029](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbad4c6989d489097f42bbe38001a3f8ca1c5c11&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5029/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5029 +/- ## ========================================== - Coverage 77.19% 77.18% -0.01% ========================================== Files 128 128 Lines 21877 21877 ========================================== - Hits 16888 16886 -2 - Misses 4989 4991 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5029/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=footer). Last update [bbad4c6...0df885f](https://codecov.io/gh/huggingface/transformers/pull/5029?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I forgot to add the meta tag: datasets: - squad_v2 Sorry<|||||>added the metadata, @mrm8488
transformers
5,028
closed
Add reference to NLP dataset
06-15-2020 20:19:35
06-15-2020 20:19:35
I forgot to add the meta tag: datasets: - squad_v2 Sorry<|||||>same here
transformers
5,027
closed
Remove old doc page and add note about cache in installation
As discussed offline, this removes the old page "Loading Google AI or OpenAI pre-trained weights or PyTorch dump" and moves the note about the cache into the installation page (also making that up to date ^^).
06-15-2020 20:10:07
06-15-2020 20:10:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=h1) Report > Merging [#5027](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbad4c6989d489097f42bbe38001a3f8ca1c5c11&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5027/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5027 +/- ## ======================================= Coverage 77.19% 77.19% ======================================= Files 128 128 Lines 21877 21877 ======================================= + Hits 16888 16889 +1 + Misses 4989 4988 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=footer). Last update [bbad4c6...2d3587c](https://codecov.io/gh/huggingface/transformers/pull/5027?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,026
closed
Error while trying to retrieve BERT embeddings for a custom task
# 🐛 Bug ## Information I am using BERT [base-uncased]. Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Here's the error: ``` I0615 15:19:52.531945 140468956133120 file_utils.py:41] PyTorch version 1.1.0 available. I0615 15:19:53.723086 140468956133120 file_utils.py:57] TensorFlow version 2.0.0 available. bert-base-cased I0615 15:19:53.860575 140468956133120 configuration_utils.py:256] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json from cache at /users2/user1/.cache/torch/transformers/b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.9da767be51e1327499df13488672789394e2ca38b877837e52618a67d7002391 I0615 15:19:53.861034 140468956133120 configuration_utils.py:292] Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": null, "do_sample": false, "eos_token_ids": null, "finetuning_task": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_beams": 1, "num_hidden_layers": 12, "num_labels": 2, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": true, "output_past": true, "pad_token_id": 0, "pruned_heads": {}, "repetition_penalty": 1.0, "temperature": 1.0, "top_k": 50, "top_p": 1.0, "torchscript": false, "type_vocab_size": 2, "use_bfloat16": false, "vocab_size": 28996 } I0615 15:19:53.934896 140468956133120 configuration_utils.py:256] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json from cache at /users2/user1/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517 I0615 15:19:53.935213 140468956133120 configuration_utils.py:292] Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": null, "do_sample": false, "eos_token_ids": null, "finetuning_task": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_beams": 1, "num_hidden_layers": 12, "num_labels": 2, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pad_token_id": 0, "pruned_heads": {}, "repetition_penalty": 1.0, "temperature": 1.0, "top_k": 50, "top_p": 1.0, "torchscript": false, "type_vocab_size": 2, "use_bfloat16": false, "vocab_size": 30522 } I0615 15:19:54.006190 140468956133120 tokenization_utils.py:501] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at 
/users2/user1/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 I0615 15:19:54.133543 140468956133120 modeling_utils.py:461] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin from cache at /users2/user1/.cache/torch/transformers/35d8b9d36faaf46728a0192d82bf7d00137490cd6074e8500778afed552a67e5.3fadbea36527ae472139fe84cddaa65454d7429f12d543d80bfc3ad70de55ac2 BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": null, "do_sample": false, "eos_token_ids": null, "finetuning_task": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_beams": 1, "num_hidden_layers": 12, "num_labels": 2, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": true, "output_past": true, "pad_token_id": 0, "pruned_heads": {}, "repetition_penalty": 1.0, "temperature": 1.0, "top_k": 50, "top_p": 1.0, "torchscript": false, "type_vocab_size": 2, "use_bfloat16": false, "vocab_size": 28996 } len(input_ids): 636 Num Batches: 40 --------------------------------------- --------------------------------------- --------------------------------------- Traceback (most recent call last): File "./dlatkInterface.py", line 2020, in <module> main() File "./dlatkInterface.py", line 1013, in main args.feattable = fe.addBERTTable_(modelName = args.bertmodel, aggregations=args.bertaggs, layersToKeep=args.bertlayers, noContext=args.bertnocontext, layerAggregations = args.bertlayeraggs, wordAggregations=args.transwordaggs, valueFunc = args.valuefunc) File "/users2/user1/NLP/dlatk/dlatk/featureExtractor.py", line 1327, in addBERTTable_ encAllLayers = model(input_ids = input_ids_padded, attention_mask = attention_mask_padded, token_type_ids = token_type_ids_padded) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/users2/user1/.local/lib/python3.5/site-packages/transformers/modeling_bert.py", line 783, in forward input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/users2/user1/.local/lib/python3.5/site-packages/transformers/modeling_bert.py", line 173, in forward inputs_embeds = self.word_embeddings(input_ids) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/sparse.py", line 117, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 1506, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:193 ``` I am just trying to retrieve the embeddings from the layers that I want and store it in a list. 
Here's the code block that hits the error:
```
config = AutoConfig.from_pretrained(modelName, output_hidden_states=True)
tokenizer = AutoTokenizer.from_pretrained(tokenizerName)
model = AutoModel.from_pretrained(modelName, config=config)
cuda = False
model.eval()
batch_size=16
.
.
.
num_batches = int(np.ceil(len(input_ids)/batch_size))
encSelectLayers = []
print ('len(input_ids): ',len(input_ids))
print ('Num Batches:', num_batches)
for i in range(num_batches):
    input_ids_padded = pad_sequence(input_ids[i*batch_size:(i+1)*batch_size], batch_first = True, padding_value=tokenizer.pad_token_id)
    token_type_ids_padded = pad_sequence(token_type_ids[i*batch_size:(i+1)*batch_size], batch_first = True, padding_value=0)
    attention_mask_padded = pad_sequence(attention_mask[i*batch_size:(i+1)*batch_size], batch_first = True, padding_value=0)
    if cuda:
        input_ids_padded = input_ids_padded.to('cuda')
        token_type_ids_padded = token_type_ids_padded.to('cuda')
        attention_mask_padded = attention_mask_padded.to('cuda')
    input_ids_padded = input_ids_padded.long()
    token_type_ids_padded = token_type_ids_padded.long()
    attention_mask_padded = attention_mask_padded.long()
    #print (input_ids_padded.shape, token_type_ids_padded.shape, attention_mask_padded.shape)
    #print (input_ids_padded)
    #print (token_type_ids_padded)
    #print (attention_mask_padded)
    print ('---------------------------------------')
    with torch.no_grad():
        encAllLayers = model(input_ids = input_ids_padded, attention_mask = attention_mask_padded, token_type_ids = token_type_ids_padded)
    encAllLayers = encAllLayers[-1]
    for lyr in layersToKeep:
        #Shape: (num Layers, num_batches, batch_size, max Seq len, 768)
        encSelectLayers.append([encAllLayers[int(lyr)].detach().cpu().numpy()])
    del encAllLayers
print (np.array(encSelectLayers).shape)
```

## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-4.4.0-171-generic-x86_64-with-debian-stretch-sid
- Python version: 3.5.2
- CUDA version: 10.1
- PyTorch version (GPU?): 1.1 (True)
- Tensorflow version (GPU?): 2.0 (True)
- Using GPU in script: No
- Using distributed or parallel set-up in script?: No
06-15-2020 19:28:43
06-15-2020 19:28:43
Hi! The error seems to come from your data processing. It's hard to help you without knowing what your inputs are, how `pad_sequence` works, and how you tokenize your inputs. In recent transformers versions, the tokenizer can take care of truncating/padding and do so for input ids, attention masks and token type ids. Using the encode/encode_plus methods to do this would reduce the risk of errors when pre-processing.<|||||>Thank you for your response @LysandreJik I would like to add a few more details pertaining to this error here. `pad_sequence` is the method from [torch.nn.utils.rnn](https://pytorch.org/docs/master/generated/torch.nn.utils.rnn.pad_sequence.html#torch-nn-utils-rnn-pad-sequence). With respect to using `encode/encode_plus`, we split the text into multiple segments when it goes past the max-token limit, so we would still need to process the output of `encode_plus` to fit it into the token limit (which is what we do right now, but using `tokenize`, `convert_tokens_to_ids` and `create_token_type_ids_from_sequences`). This error doesn't occur when I replace the three lines loading the config, tokenizer and model (using `AutoConfig`, `AutoTokenizer` and `AutoModel` respectively) with just `BertTokenizer` and `BertModel` directly. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
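As a side note on the traceback above: the logs appear to show a `bert-base-cased` model config (vocab size 28996) being paired with the `bert-base-uncased` vocabulary (30522 tokens), which by itself can push token ids past the embedding table and trigger exactly this index error. Below is a minimal sketch of letting one matching tokenizer/model pair and `batch_encode_plus` handle the padding (a sketch only; padding/truncation keyword names vary slightly across transformers 2.x releases):

```python
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

model_name = "bert-base-cased"  # use the same checkpoint for both tokenizer and model
config = AutoConfig.from_pretrained(model_name, output_hidden_states=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, config=config)
model.eval()

texts = ["first document ...", "second document ..."]
batch = tokenizer.batch_encode_plus(texts, max_length=512, pad_to_max_length=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)
hidden_states = outputs[-1]  # tuple: (embedding output, layer 1, ..., layer 12)
```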
transformers
5,025
closed
Convert hans to Trainer
This follows up from #4854 (@julien-c I took all your comments into account) and will close #4742.
06-15-2020 19:27:49
06-15-2020 19:27:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=h1) Report > Merging [#5025](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bf4098e03afaed2c6e3671c69fd57e9ac304752&el=desc) will **decrease** coverage by `0.09%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5025/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5025 +/- ## ========================================== - Coverage 77.18% 77.09% -0.10% ========================================== Files 128 128 Lines 21877 21877 ========================================== - Hits 16886 16866 -20 - Misses 4991 5011 +20 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-1.40%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.73% <0.00%> (+0.23%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=footer). Last update [1bf4098...201fae2](https://codecov.io/gh/huggingface/transformers/pull/5025?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,024
closed
[Bart] Question Answering Model is added to tests
As noted in PR #4908, Bart for QA was not added to the test suite. This PR fixes the output attentions test for encoder-decoder QA models. If we had named tuples, such a test could be made much cleaner.
06-15-2020 18:51:38
06-15-2020 18:51:38
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=h1) Report > Merging [#5024](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bf4098e03afaed2c6e3671c69fd57e9ac304752&el=desc) will **decrease** coverage by `0.76%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5024/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5024 +/- ## ========================================== - Coverage 77.18% 76.42% -0.77% ========================================== Files 128 128 Lines 21877 21877 ========================================== - Hits 16886 16719 -167 - Misses 4991 5158 +167 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `36.80% <0.00%> (-3.88%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.56% <0.00%> (-2.58%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.17% <0.00%> (-1.41%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.67% <0.00%> (-0.30%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=footer). Last update [1bf4098...f36c3eb](https://codecov.io/gh/huggingface/transformers/pull/5024?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,023
closed
Multi class classification using Reformer Model
# ❓ Questions & Help
## Details
Hi All: I am trying to implement **multi-class classification** using **ReformerModel / ReformerModelWithLMHead**, but I don't see any API implementation for this. I have text data with 10+ classes and wanted to use the pre-trained ReformerModel / ReformerModelWithLMHead to classify it. I see that classes like RobertaForSequenceClassification have support for text classification, but I could not find an equivalent for Reformer. Please let me know whether this is implemented in the Reformer model or is work in progress; I tried to find an implementation but could not find any. ps: I am referring to this paper https://web.stanford.edu/class/cs224n/reports/custom/report21.pdf where they have implemented text classification using Reformer. Thank you Amit
06-15-2020 18:09:10
06-15-2020 18:09:10
In the reformer projects there is only an card for QA task. Nothing about reformerforsequenceclassification or other model heads. @patrickvonplaten has assigned the tasks to himself. Are there any more steps to do when I want to pretrain reformer on larger datasets using the notebook ? Specifically we would like to train it on c4 dataset as described in the open tasks.<|||||>**@flozi00** thank you for the reply. So there is no way at this point that I can implement multi-class classification using Reformer? Or is there a workaround that I can use.<|||||>@as-stevens you could try to write the classification head yourself. At the moment there is no ready to use solution given.<|||||>There are currently no pretrained weights for a bidirectional reformer, so adding a QA model extension is not a high priority at the moment. PRs with a clean implementation of ReformerForQA would be welcome :-) <|||||>> @as-stevens you could try to write the classification head yourself. > At the moment there is no ready to use solution given. **@flozi00** could point some reference that I can use to write a classification head?<|||||>I have free computing capacities to train bidirectional reformer model on larger datasets, but no time to do so. Any advice or ready to use scripts to do so ? @patrickvonplaten <|||||>@flozi00 - this sounds nice! Let me come back to you on this. There is currently no script to do so, but I will think about it :-)<|||||>@patrickvonplaten we could talk about the details in private chat ? The development should not pause cause missing resources<|||||>> > @as-stevens you could try to write the classification head yourself. > > At the moment there is no ready to use solution given. > > **@flozi00** could point some reference that I can use to write a classification head? 1. https://github.com/ThilinaRajapakse/simpletransformers/blob/master/simpletransformers/custom_models/models.py 2. https://github.com/ThilinaRajapakse/simpletransformers/tree/master/simpletransformers/classification/transformer_models<|||||>> > > @as-stevens you could try to write the classification head yourself. > > > At the moment there is no ready to use solution given. > > > > > > **@flozi00** could point some reference that I can use to write a classification head? > > 1. https://github.com/ThilinaRajapakse/simpletransformers/blob/master/simpletransformers/custom_models/models.py > 2. https://github.com/ThilinaRajapakse/simpletransformers/tree/master/simpletransformers/classification/transformer_models ** @flozi00 ** Thank you for sharing the above the links, let me go through the links and try to understand the flow/architecture. I am new to this and would appreciate any other pointers as well.<|||||>**@flozi00** Please let me know who do I verify my changes. By creating the model on a classification data set or is there a way better way to get the code verified? I am new to this hence some of my questions may seem basic.<|||||>Just open an pull request with your changes. patrickvonplaten is the author of the implementation of reformer in this repository, I think he would have a look on it. Training the classification model on an dataset would be a good proof that it is working.<|||||>@patrickvonplaten I have implemented the ReformerForSequenceClassification and ReformerForClassificationHead. I have taken RobertaForSequenceClassification and other classification head as a reference. Further, I have not opened a pull request as I wanted to make sure that I have a working sample code before I raise a PR. 
Test before raising a PR. I am using the IMDB review dataset. The link to the collab; https://colab.research.google.com/drive/1KFsQxLqsMB6vBF4_bRmTFGhdGwkgx0zI?usp=sharing The 3rd cell has all the code related to the reformer classification head. Further, I am getting; **AssertionError: If training, make sure that config.axial_pos_shape factors: (128, 512) multiply to sequence length. Got prod((128, 512)) != sequence_length: 2048. You might want to consider padding your sequence length to 65536 or changing config.axial_pos_shape**. I looked at https://github.com/huggingface/transformers/issues/4565 but that looks like it is an LM model and could find any solution for the same. Any thoughts/suggestions?<|||||>From the docs ``` In practice, the parameter config.axial_pos_embds_dim is set to list(d1,d2)(d1,d2) which sum has to be equal to config.hidden_size and config.axial_pos_shape is set to list(n1s,n2s)(ns1,ns2) and which product has to be equal to config.max_embedding_size which during training has to be equal to the sequence length of the input_ids. ``` The axial pos shape is new in reformer model. The product of it values have to equal to the sequence length. You could change the sequence length or set the axial pos shape to the right values. In this case it could be an list of (32,64)<|||||>> From the docs > > ``` > In practice, the parameter config.axial_pos_embds_dim is set to list(d1,d2)(d1,d2) which sum has to be equal to config.hidden_size and config.axial_pos_shape is set to list(n1s,n2s)(ns1,ns2) and which product has to be equal to config.max_embedding_size which during training has to be equal to the sequence length of the input_ids. > ``` > > The axial pos shape is new in reformer model. > The product of it values have to equal to the sequence length. > You could change the sequence length or set the axial pos shape to the right values. In this case it could be an list of (32,64) When I try to change the axial position shape(assuming max sequence length to be 512), I get error ---> 12 model = ReformerForSequenceClassification.from_pretrained('google/reformer-enwik8', num_labels = 2, axial_pos_shape= (16,32)) /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 751 raise RuntimeError( 752 "Error(s) in loading state_dict for {}:\n\t{}".format( --> 753 model.__class__.__name__, "\n\t".join(error_msgs) 754 ) 755 ) RuntimeError: Error(s) in loading state_dict for ReformerForSequenceClassification: **size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([128, 1, 256]) from checkpoint, the shape in current model is torch.Size([16, 1, 256]). size mismatch for reformer.embeddings.position_embeddings.weights.1: copying a param with shape torch.Size([1, 512, 768]) from checkpoint, the shape in current model is torch.Size([1, 32, 768]).** <|||||>> > From the docs > > ``` > > In practice, the parameter config.axial_pos_embds_dim is set to list(d1,d2)(d1,d2) which sum has to be equal to config.hidden_size and config.axial_pos_shape is set to list(n1s,n2s)(ns1,ns2) and which product has to be equal to config.max_embedding_size which during training has to be equal to the sequence length of the input_ids. > > ``` > > > > > > The axial pos shape is new in reformer model. > > The product of it values have to equal to the sequence length. > > You could change the sequence length or set the axial pos shape to the right values. 
In this case it could be an list of (32,64) > > When I try to change the axial position shape(assuming max sequence length to be 512), I get error > > ---> 12 model = ReformerForSequenceClassification.from_pretrained('google/reformer-enwik8', num_labels = 2, axial_pos_shape= (16,32)) > > /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) > 751 raise RuntimeError( > 752 "Error(s) in loading state_dict for {}:\n\t{}".format( > --> 753 model.**class**.**name**, "\n\t".join(error_msgs) > 754 ) > 755 ) > > RuntimeError: Error(s) in loading state_dict for ReformerForSequenceClassification: > **size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([128, 1, 256]) from checkpoint, the shape in current model is torch.Size([16, 1, 256]). size mismatch for reformer.embeddings.position_embeddings.weights.1: copying a param with shape torch.Size([1, 512, 768]) from checkpoint, the shape in current model is torch.Size([1, 32, 768]).** @flozi00 any idea what may be wrong?<|||||>Sorry, I have pretty much to do at the moment. I will have a look on it, but in worst case it take time up to Sunday evening in MEZ Timezone. Maybe someone else can answer you earlier<|||||>> Sorry, I have pretty much to do at the moment. > I will have a look on it, but in worst case it take time up to Sunday evening in MEZ Timezone. > Maybe someone else can answer you earlier Thank you so much for the quick response! I appreciate it. I am just wondering how else could I get some one to throw light on this issue. <|||||>Just link someone from huggingface team to this as done with patrickvonplaten earlier. in my experience the team is very nice and helpful all the time. Maybe you should open an PR with your code and write [WIP] in front of it, so it gets better seen than an issue and you could get faster help by more people<|||||>> Just link someone from huggingface team to this as done with patrickvonplaten earlier. > in my experience the team is very nice and helpful all the time. > Maybe you should open an PR with your code and write [WIP] in front of it, so it gets better seen than an issue and you could get faster help by more people Makes sense, Thank you for the advice!<|||||>Hey, sorry. I still found no time to have a look on your colab notebook. Did you opened the pull request?<|||||>> Hey, sorry. > I still found no time to have a look on your colab notebook. > Did you opened the pull request? @flozi00 I created a pull request. https://github.com/huggingface/transformers/pull/5198 Please have a look at it and provide your feedback/suggestions.<|||||>Hey, just have seen that you already got feedback. I still didn't had time to run your code cause my calendar is very full until Friday, sorry<|||||>> Hey, just have seen that you already got feedback. > I still didn't had time to run your code cause my calendar is very full until Friday, sorry Hey, I got caught up with work, could not reply earlier! I received the feedback and I have implemented the initial feedback as well. Trying to implement the test case for classification head, it is taking time as I need to understand the underlying test framework and also the architecture of the test cases. Further, as the implementation of the classification head, did not had much review comment I decided to test the changes on the IMDB Dataset, but I have not been successful! I am getting CUDA error! 
link to the notebook; https://colab.research.google.com/drive/1KFsQxLqsMB6vBF4_bRmTFGhdGwkgx0zI#scrollTo=vOyStELCX8VA&uniqifier=2<|||||>https://discuss.pytorch.org/t/runtimeerror-cuda-error-cublas-status-alloc-failed-when-calling-cublascreate-handle/78545/6 There are similar issues here. It seems like there is something out of bounds / out of index range<|||||>> https://discuss.pytorch.org/t/runtimeerror-cuda-error-cublas-status-alloc-failed-when-calling-cublascreate-handle/78545/6 > > There are similar issues here. > It seems like there is something out of bounds / out of index range @flozi00 Though the issue was different, I was able to solve the issue, and finally, the classification head is working, Thanks Further, I am trying to play with the sequence length parameter but the model throws an error; posted a separate issue https://github.com/huggingface/transformers/issues/5320 for the same. Please let me know if you have an idea about this one. ps: I will update the notebook once I am done with the sequence length setting.<|||||>Any Idea how I can get sentence representation using Reformer Model? ( 1, 1024 ) shape using reformer-enwik8? Thanks!<|||||>you can just use the output of `ReformerModel` no? <|||||>@patrickvonplaten It's returning sentences generation output instead of vector?<|||||>@patrickvonplaten This post is a little longer, I appreciate your time and sorry for the long post! But I have been trying to make the classifier work and hence could help myself. I have implemented a classification model using the Plain Reformer model, Link to collab notebook. [https://colab.research.google.com/drive/1l61NccWTGMfNFPj1T8kvMjnik2iEQWfe?usp=sharing](url) I have used the pre-trained crime and punishment (CP) tokenizer sequence tokenization. But, I am not able to improve the accuracy of the model form **~50%**, Which is equal to random classification as it is a binary classifier. I tried to play around with the learning rate, batch size, epochs, and sequence length but it does not help. I have implemented a classifier using Roberta and that seems to work fine, giving me an accuracy of ~94%. [https://colab.research.google.com/drive/10vv8YgwJzbKDpd0Q-pXupP86b1pOZJg8?usp=sharing](url) So, I started comparing the difference between the two. - For Reformer, I am not able to use any existing pre-trained model for fine-tuning unlike Roberta. - The output of the tokenizer.tokenize(<sentence>) is also much different in both the cases. I mean in the case of Roberta the sentence get tokenized more or less in words, while in case of Reformer the sentence is mostly broken down in characters except for very common words like the, and, it ..etc `Roberta output: '<s>', 'ĠOne', 'Ġof', 'Ġthe', 'Ġother', 'Ġreviewers', 'Ġhas', Reformer output: '▁', '<', 's', '>', '▁', 'O', 'n', 'e', '▁of', '▁the', '▁o', 'ther', '▁re', 'v', 'i', 'e', 'w', 'er', ` So, I am wondering if the CP tokenizer still needs to be trained on larger data. I tried to use the pre-trained XL tokenizer as that is also a sentence piece tokenization but that stated giving me memory issues. Can, I use a different classifier, I read in one of the blogs that tokenizer is a by-product of the model training. Hence, it is tried to the pre-trained model. Are there any pre-trained weights available for Reformer to be used to fine-tune for classification. Like it is there for other models, If not; do we have anything planned for it or not? 
Thanks Amit <|||||>Hey @as-stevens, yeah, Reformer sadly still does not have a bi-directional pretrained model, so it will be hard to fine-tune a Reformer model for your task. The `enwik-8` model was trained using the CLM objective as a char-lm, which means that it does not really help at all for classification. Also, the model was trained on chars and not tokens... Overall, I would recommend just using RoBERTa for now. I hope that the original authors will open-source pretrained weights at some point. We are also discussing internally whether it makes sense to pre-train Reformer ourselves.<|||||>@patrickvonplaten thank you for addressing all the queries! This helps. After playing with the different tokenizers (XLNet, BERT and RoBERTa), I could sense that the Reformer tokenizer is char-level and not word-level. And after playing with the different hyperparameters and seeing no change in model state, it looked like the model is training from scratch. Thanks for answering all the questions. Any ETA for the pre-trained model?<|||||>@patrickvonplaten do you have any updates on this? "We are also discussing internally whether it makes sense to pre-train Reformer ourselves." Thank you!<|||||>Hey @mihaidobri, Reformer has a `ReformerForMaskedLM` that can be used for pretraining :-)<|||||>@as-stevens, @patrickvonplaten I've improved the performance by classifying the average of the token representations instead of classifying the `<s>` token. You only need to change this line of code in the ReformerClassificationHead class: `x = features[:, 0, :]  # take <s> token (equiv. to [CLS])` to this: `x = torch.mean(features, dim=1)` <|||||>@shramezani Would it be possible to share the notebook for reference? Also, what accuracy did you achieve for the classification task?<|||||>This is a link: https://colab.research.google.com/drive/1kTMNT8-UprrWEb-Vpjji4Qbs8gURZiFs?usp=sharing I changed the representation and got an accuracy score of 0.69.<|||||>@shramezani the notebook link is not accessible, permission denied, please check.
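To make the head being discussed concrete, here is a hedged sketch of a mean-pooling classification head on top of `ReformerModel` (a sketch only; `ReformerForSequenceClassification` was not part of the library at the time of this thread, and the mean pooling follows the suggestion above rather than any official design):

```python
import torch
import torch.nn as nn
from transformers import ReformerModel, ReformerTokenizer

class ReformerClassificationHead(nn.Module):
    def __init__(self, input_dim, num_labels, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.out_proj = nn.Linear(input_dim, num_labels)

    def forward(self, hidden_states):
        pooled = torch.mean(hidden_states, dim=1)  # average over the sequence dimension
        return self.out_proj(self.dropout(pooled))

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
encoder = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")
encoder.eval()  # in training mode, short inputs hit the axial_pos_shape assertion mentioned above

input_ids = tokenizer.encode("A great movie", return_tensors="pt")
sequence_output = encoder(input_ids)[0]                            # (batch, seq_len, output_dim)
head = ReformerClassificationHead(sequence_output.shape[-1], num_labels=2)
logits = head(sequence_output)
```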
transformers
5,022
closed
Latest merge [Benchmark] Memory benchmark utils #4198 fails on Windows
# 🐛 Bug ## Information Model I am using: Bert ## To reproduce Steps to reproduce the behavior: 1. Windows 2. Install transformers from source ## Environment info python = 3.7.6 pytorch = 1.5 cuda = 10.2 I think the problem happens because in code: https://github.com/huggingface/transformers/blob/master/src/transformers/__init__.py `from .benchmark import PyTorchBenchmark, PyTorchBenchmarkArguments` and in code: https://github.com/huggingface/transformers/blob/master/src/transformers/benchmark/benchmark_utils.py there is `from signal import SIGKILL` > signal.SIGKILL > > Kill signal. > It cannot be caught, blocked, or ignored. > Availability: Unix. I think it will fail for Windows if you try `from signal import SIGKILL`. @patrickvonplaten https://github.com/huggingface/transformers/pull/4198
06-15-2020 17:09:51
06-15-2020 17:09:51
Thanks for noting that - you're 100% correct :-)
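A minimal sketch of the kind of platform guard that avoids this import error (one possible approach only, not the actual patch that landed in the repo):

```python
import os
import signal

def hard_kill(pid: int) -> None:
    # signal.SIGKILL only exists on Unix; fall back to SIGTERM on Windows,
    # where an un-catchable kill signal is not available.
    sig = getattr(signal, "SIGKILL", signal.SIGTERM)
    os.kill(pid, sig)
```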
transformers
5,021
closed
Add position_ids in TFElectra models docstring
Just a small thing, but it was forgotten when adding those I guess.
06-15-2020 16:54:38
06-15-2020 16:54:38
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=h1) Report > Merging [#5021](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1affde2f10c653e36601dd7a3e6a2525ae7ced57&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5021/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5021 +/- ## ======================================= Coverage 77.26% 77.27% ======================================= Files 128 128 Lines 21847 21847 ======================================= + Hits 16880 16882 +2 + Misses 4967 4965 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `91.28% <ø> (ø)` | | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.20% <0.00%> (-0.24%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `34.07% <0.00%> (+5.41%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=footer). Last update [1affde2...7ffe4b3](https://codecov.io/gh/huggingface/transformers/pull/5021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,020
closed
Run error when running the PPLM example!
# ❓ Questions & Help

## Details

![image](https://user-images.githubusercontent.com/49581245/84684669-94247480-af6b-11ea-993b-a2eed85ac805.png)

Other models I run also report an error like this.
06-15-2020 16:53:13
06-15-2020 16:53:13
transformers
5,019
closed
Not able to reproduce XNLI results from Google's mBERT weights.
# 🐛 Bug

Hello, I am using mBERT from https://github.com/google-research/bert/blob/master/multilingual.md together with the run_xnli.py script.

Language: training the model on English (en) and testing it on French (fr).

The problem arises when I download mBERT's weights from https://github.com/google-research/bert/blob/master/multilingual.md and then use https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py to get PyTorch-compatible weights. I then used the PyTorch-compatible weights with run_xnli.py provided here: https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_xnli.py, setting the language to fr and train_language to en. The model doesn't learn properly and never reaches performance above 0.40 on fr.

However, if I use the weights provided directly by huggingface for bert-base-multilingual-cased (as mentioned here: https://huggingface.co/transformers/v2.3.0/examples.html), it works fine and the mBERT model achieves SOTA performance. Would it be possible to understand why this might be happening?

The task I am working on is XNLI: training on English (en) and zero-shot testing on French (fr).

Steps to reproduce the behavior:
1. Get the mBERT model from the link provided here: https://github.com/google-research/bert/blob/master/multilingual.md
2. Get the PyTorch-compatible weights using https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py
3. Copy the vocab.txt and config.json from the TF weights. Add "model_type": "bert" to the config.
4. Run the example run_xnli.py provided here: https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_xnli.py
5. Default parameters can be used, with model_path set to the PyTorch weights of mBERT.

- `transformers` version: 2.11
- Python version: 3.7
- PyTorch version (GPU): 1.5.0
- Tensorflow version (GPU): 2.2.0
- Using GPU in script: Yes
- Using parallel setup across 3 1080ti GPUs
06-15-2020 16:52:08
06-15-2020 16:52:08
Edit: For some reason the model xlm-mlm-tlm-xnli15-1024 (https://huggingface.co/transformers/pretrained_models.html) also doesn't learn properly; the train accuracy never goes beyond 0.35. A fix for XLM is provided here: https://github.com/huggingface/transformers/pull/5035. The issue with mBERT (weights from Google) is still not resolved.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
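As a quick sanity check of the conversion described in steps 2-4, here is a minimal loading sketch; the directory name is a placeholder, and the assumption is that it contains the converted pytorch_model.bin, the config.json with "model_type": "bert", and the vocab.txt copied from the TF checkpoint:

```python
from transformers import BertConfig, BertForSequenceClassification, BertTokenizer

# "./mbert-converted" is a placeholder for the directory produced by the conversion script
model_dir = "./mbert-converted"
config = BertConfig.from_pretrained(model_dir)
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertForSequenceClassification.from_pretrained(model_dir, config=config)

# If this loads without warnings about missing/unexpected weights (other than the
# freshly initialized classification head), the conversion itself is probably not the culprit.
print(config.vocab_size, tokenizer.vocab_size)
```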
transformers
5,018
closed
bart-large-xsum config task_specific_params['summarization_params'] wrong
They are the bart-large-cnn params, not the xsum ones.
06-15-2020 16:38:07
06-15-2020 16:38:07
set to `{}`
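For anyone hitting this before the hosted config is updated, a minimal local-override sketch (assuming the `facebook/bart-large-xsum` identifier; older versions may use plain `bart-large-xsum`):

```python
from transformers import BartConfig

config = BartConfig.from_pretrained("facebook/bart-large-xsum")
print(config.task_specific_params)  # inspect what the hosted checkpoint currently ships

# mirror the fix above locally if the params are still the bart-large-cnn ones
config.task_specific_params = {}
```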
transformers
5,017
closed
Add CRF layer after Transformer model
I've read a paper titled "Named Entity Recognition in Chinese Electronic Medical Records Using Transformer-CRF". It takes the Transformer's output as the CRF's input, as shown in the figure. Which function could I use to implement this? model.add() doesn't work. ![Screenshot_20200615_173206_cn wps moffice_eng](https://user-images.githubusercontent.com/64955334/84675939-7d782080-af5f-11ea-8d9d-9dc812b2229a.png)
06-15-2020 15:25:37
06-15-2020 15:25:37
You can directly feed a transformer's output hidden states into a CRF implementation like this one: https://github.com/s14t284/TorchCRF. Hope it helps!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> I've read a paper titled "Named Entity Recognition in Chinese Electronic Medical Records Using Transformer-CRF". > It takes Transformer's output as CRF's input, as shown in the figure. > Which function could I use to implement it? model.add() doesn't work. > ![Screenshot_20200615_173206_cn wps moffice_eng](https://user-images.githubusercontent.com/64955334/84675939-7d782080-af5f-11ea-8d9d-9dc812b2229a.png) Hey, did you manage to implement it?<|||||>This repository shows how to add a CRF layer on top of transformers to get better performance on token classification tasks: https://github.com/shushanxingzhe/transformers_ner<|||||>I don't think that implementation is good. First, it doesn't take into account that word pieces get a padding label index (usually -100), which torchcrf does not expect; it also runs over all the tags, including ones you shouldn't use, such as the non-first word pieces of a token (token == space-separated string).<|||||>Does anyone have a clean implementation of a BERT-CRF? Preferably in a Jupyter notebook?
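For anyone looking for a starting point, here is a minimal sketch of the idea from the first comment. It uses the `pytorch-crf` package (`torchcrf`) rather than the repository linked above, and the model name, label count and data shapes are placeholders:

```python
import torch
from torch import nn
from torchcrf import CRF  # pip install pytorch-crf
from transformers import BertModel

class BertCrfTagger(nn.Module):
    def __init__(self, num_labels: int, model_name: str = "bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.emissions = nn.Linear(self.bert.config.hidden_size, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask)[0]  # (batch, seq, hidden)
        scores = self.emissions(hidden)                                  # (batch, seq, num_labels)
        mask = attention_mask.bool()
        if labels is not None:
            # labels must be valid tag indices wherever mask is True, so map sub-word
            # pieces / -100 padding to a real tag (or mask them out) before this step
            return -self.crf(scores, labels, mask=mask)  # negative log-likelihood as the loss
        return self.crf.decode(scores, mask=mask)        # best tag sequence per example
```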
transformers
5,016
closed
Cached TF files cannot be loaded with `from_pretrained` without internet connection
# 🐛 Bug

## Information

For TF models in the latest version of `transformers v2.11.0`, when the internet is turned off, `from_pretrained` throws an error even when the model has previously been downloaded to the cache:

```python
# does not work (wifi/internet turned off)
model = TFBertModel.from_pretrained('bert-base-uncased', local_files_only=True)
```

The error is:

```
OSError: Can't load weights for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
```

The above error occurs regardless of how `local_files_only` is set. This can be problematic in certain settings (e.g., air-gapped networks, firewalls).

By contrast, for PyTorch models with NO internet, `from_pretrained` works perfectly and correctly loads files from the cache both when `local_files_only=False` and when `local_files_only=True`, which is a useful parameter recently added in PR #2930 by @BramVanroy and merged by @LysandreJik:

```python
# works and is fast (wifi/internet turned off)
model = BertModel.from_pretrained('bert-base-uncased', local_files_only=True)

# works but is slow due to outgoing connection attempts (wifi/internet turned off)
model = BertModel.from_pretrained('bert-base-uncased', local_files_only=False)
```

## Environment Info

- `transformers` version: 2.11.0
- Platform: Linux-4.15.0-88-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
06-15-2020 15:21:15
06-15-2020 15:21:15
Hi! This issue should have been solved by https://github.com/huggingface/transformers/pull/5116<|||||>Hi @LysandreJik: I just wanted to let you know that this issue is still not resolved as of v3.0.2. If you ensure the model files (and vocab, etc.) exist in the cache and then turn off wifi/internet, model loading does not work in TensorFlow (but does work in PyTorch):

This **does not** work:

```python
# wifi off and model files exist in cache
from transformers import *
model = TFAutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased')
# error is: Cannot load weights for distilbert-base-uncased
```

This **does** work:

```python
# wifi off and model files exist in cache
from transformers import *
model = AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased')
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Thanks, @LysandreJik. I can confirm that PR #6091 resolves this issue (tested by manually applying the fix to a local `transformers==3.3.0` installation).
transformers
5,015
closed
Make DataCollator a callable
As discussed with @julien-c, change the `DataCollator` to be a callable (the trainer can now take any function as a data collator). I removed the abstract class and made a type alias to keep the type annotations instead.
06-15-2020 14:56:47
06-15-2020 14:56:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=h1) Report > Merging [#5015](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66bcfbb130a3a60d873597fdca05a8d539ef3401&el=desc) will **increase** coverage by `0.06%`. > The diff coverage is `73.91%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5015/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5015 +/- ## ========================================== + Coverage 77.20% 77.27% +0.06% ========================================== Files 128 128 Lines 21854 21845 -9 ========================================== + Hits 16873 16880 +7 + Misses 4981 4965 -16 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.65% <70.00%> (+0.42%)` | :arrow_up: | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.43% <100.00%> (-0.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=footer). Last update [66bcfbb...88370ec](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM – we should however document this change in the next release notes, as it's breaking for end-users who would have implemented a custom data_collator
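As a rough illustration of what the change enables, any plain function with the shape below can now be passed as `data_collator`; the function name and the batch layout here are made up for the example, not part of the PR:

```python
from typing import Dict, List
import torch

def pad_to_batch(examples: List[torch.Tensor]) -> Dict[str, torch.Tensor]:
    # examples: one 1-D tensor of token ids per dataset item
    input_ids = torch.nn.utils.rnn.pad_sequence(examples, batch_first=True, padding_value=0)
    return {"input_ids": input_ids, "labels": input_ids.clone()}

# trainer = Trainer(model=model, args=training_args, train_dataset=dataset, data_collator=pad_to_batch)
```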
transformers
5,014
closed
Add bart-base
Also adds two `@slow` integration tests for fill-mask tasks. I made some guesses on the correct conversion, which will hopefully be confirmed by the authors in this [issue](https://github.com/pytorch/fairseq/issues/2242).
06-15-2020 14:42:57
06-15-2020 14:42:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=h1) Report > Merging [#5014](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66bcfbb130a3a60d873597fdca05a8d539ef3401&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5014/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5014 +/- ## ========================================== + Coverage 77.20% 77.26% +0.05% ========================================== Files 128 128 Lines 21854 21854 ========================================== + Hits 16873 16885 +12 + Misses 4981 4969 -12 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.12% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.25% <0.00%> (-0.24%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=footer). Last update [66bcfbb...0dd162e](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>add a model card too if needed/useful<|||||>Will and model cards for this and bart-large at some point.
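A quick sanity-check snippet in the spirit of the slow fill-mask integration tests mentioned in the description, assuming the checkpoint ends up published under `facebook/bart-base`:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

input_ids = tokenizer.encode("My friends are <mask> but they eat too many carbs.", return_tensors="pt")
logits = model(input_ids)[0]

masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
print(tokenizer.decode(predictions.tolist()).split())  # top-5 candidate fillers for <mask>
```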
transformers
5,013
closed
[Modelcard] xlm-roberta-squadv2
06-15-2020 14:38:31
06-15-2020 14:38:31
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=h1) Report > Merging [#5013](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f7c93b3ceec341c3c794e9fc18939bc5d50b0fc2&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5013/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5013 +/- ## ======================================= Coverage 77.26% 77.26% ======================================= Files 128 128 Lines 21856 21856 ======================================= + Hits 16887 16888 +1 + Misses 4969 4968 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5013/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=footer). Last update [f7c93b3...ca606b8](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,012
closed
Raise errors that are not raised now
Raise errors that are not raised now.
06-15-2020 14:15:45
06-15-2020 14:15:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=h1) Report > Merging [#5012](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f7c93b3ceec341c3c794e9fc18939bc5d50b0fc2&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5012/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5012 +/- ## ======================================= Coverage 77.26% 77.26% ======================================= Files 128 128 Lines 21856 21856 ======================================= + Hits 16887 16888 +1 + Misses 4969 4968 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5012/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5012/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=footer). Last update [f7c93b3...d643ec8](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Pinging @VictorSanh here<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I'll close this.
transformers
5,011
closed
[Modelcard] bart-squadv2
06-15-2020 13:50:02
06-15-2020 13:50:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=h1) Report > Merging [#5011](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66bcfbb130a3a60d873597fdca05a8d539ef3401&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5011/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5011 +/- ## ========================================== + Coverage 77.20% 77.26% +0.05% ========================================== Files 128 128 Lines 21854 21854 ========================================== + Hits 16873 16885 +12 + Misses 4981 4969 -12 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=footer). Last update [66bcfbb...19b3655](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @flozi00 , this is great ! If possible can you link my original training colab so people can train their own models easily if needed. You can find it [here](https://colab.research.google.com/drive/1I5cK1M_0dLaf5xoewh6swcm5nAInfwHy?usp=sharing) Thanks !
transformers
5,010
closed
How can I print output attention on test data?
# ❓ Questions & Help

## Details

I fine-tuned a BART model to carry out a summarization task. I want to print the output attention scores on test data, but I don't know where (in which layer or config) I should set "output_attentions" to True.
06-15-2020 12:16:48
06-15-2020 12:16:48
You should do it in the forward call of the model you are using, so probably:

```python
model = BartForConditionalGeneration.from_pretrained("bart-large")
model(input_ids, output_attentions=True)
```
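A slightly fuller sketch for a fine-tuned checkpoint; the checkpoint path is a placeholder, and `output_attentions=True` is passed to `from_pretrained` (which forwards it to the config) because accepting it as a forward argument depends on the library version:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

checkpoint = "./my-finetuned-bart"  # placeholder for your fine-tuned model directory
tokenizer = BartTokenizer.from_pretrained(checkpoint)
model = BartForConditionalGeneration.from_pretrained(checkpoint, output_attentions=True)

input_ids = tokenizer.encode("a test document to summarize", return_tensors="pt")
outputs = model(input_ids)

# outputs[0] is the LM logits; with output_attentions enabled, the per-layer attention
# tensors are included among the remaining elements of the returned tuple
print(len(outputs))
```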
transformers
5,009
closed
Create README.md for finetuned BERT model
06-15-2020 12:05:12
06-15-2020 12:05:12
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=h1) Report > Merging [#5009](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ebab096e864a619717a497089d864d10e21bc536&el=desc) will **increase** coverage by `0.06%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5009/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5009 +/- ## ========================================== + Coverage 77.26% 77.33% +0.06% ========================================== Files 128 128 Lines 21854 21854 ========================================== + Hits 16886 16900 +14 + Misses 4968 4954 -14 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.25% <0.00%> (-0.24%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `33.43% <0.00%> (+4.77%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=footer). Last update [ebab096...97c4e50](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,008
closed
Add model card for StackOBERTflow-comments-small
06-15-2020 11:26:00
06-15-2020 11:26:00
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=h1) Report > Merging [#5008](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ebab096e864a619717a497089d864d10e21bc536&el=desc) will **increase** coverage by `0.39%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5008/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5008 +/- ## ========================================== + Coverage 77.26% 77.66% +0.39% ========================================== Files 128 128 Lines 21854 21854 ========================================== + Hits 16886 16973 +87 + Misses 4968 4881 -87 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=footer). Last update [ebab096...bd9e483](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,007
closed
Create README.md
06-15-2020 11:22:57
06-15-2020 11:22:57
transformers
5,006
closed
[Ignore] Added data collator for XLNet and added related calls
Added modified data collator for XLNet and added related calls in `examples/language-modeling/run_language_modeling.py`
06-15-2020 10:42:13
06-15-2020 10:42:13
transformers
5,005
closed
Increase pipeline support for ONNX export.
Supports all kinds of models for all the pipelines we have, except `summarization`, because `triu` is not supported by ONNX.
06-15-2020 10:10:08
06-15-2020 10:10:08
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=h1) Report > Merging [#5005](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935&el=desc) will **increase** coverage by `0.37%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5005/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5005 +/- ## ========================================== + Coverage 76.89% 77.26% +0.37% ========================================== Files 128 128 Lines 21854 21854 ========================================== + Hits 16804 16886 +82 + Misses 5050 4968 -82 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=footer). Last update [9931f81...9f4da89](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
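For context, a minimal sketch of how the export is typically driven through the `convert_graph_to_onnx` helper; the model name and output path are placeholders, and the exact `convert` signature should be double-checked against the version of the script you have installed:

```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

# export a PyTorch checkpoint to ONNX with a given opset version
convert(
    framework="pt",
    model="bert-base-cased",
    output=Path("onnx/bert-base-cased.onnx"),
    opset=11,
)
```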
transformers
5,004
closed
[🚀 Feature request] upload bart-base checkpoint
# 🚀 Feature request

The `bart-base` checkpoint is now available in the fairseq repo (https://github.com/pytorch/fairseq/tree/master/examples/bart#pre-trained-models), so it can now be made available here as `facebook/bart-base`. @sshleifer
06-15-2020 09:57:58
06-15-2020 09:57:58
Thanks for letting me know @patil-suraj! This is almost done, blocked on one minor [issue](https://github.com/pytorch/fairseq/issues/2242). If I don't get an answer there by Wednesday I will assume that the tokenizer is the same as bart.large and move forward.<|||||>Thanks @sshleifer for the immediate response and PR :)
transformers
5,003
closed
Cannot import transformers due to issue with signal.py (SIGKILL)
# 🐛 Bug

Can't import transformers; I get a problem with signal.py. I see that an issue was recently fixed... should I install a nightly build, or...?

```
>>> import transformers
2020-06-15 05:40:54.098560: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\15194\anaconda3\envs\ai_env\lib\site-packages\transformers\__init__.py", line 371, in <module>
    from .benchmark import PyTorchBenchmark, PyTorchBenchmarkArguments
  File "C:\Users\15194\anaconda3\envs\ai_env\lib\site-packages\transformers\benchmark\__init__.py", line 10, in <module>
    from .benchmark import PyTorchBenchmark
  File "C:\Users\15194\anaconda3\envs\ai_env\lib\site-packages\transformers\benchmark\benchmark.py", line 32, in <module>
    from .benchmark_utils import Benchmark, Memory, measure_peak_memory_cpu, start_memory_tracing, stop_memory_tracing
  File "C:\Users\15194\anaconda3\envs\ai_env\lib\site-packages\transformers\benchmark\benchmark_utils.py", line 19, in <module>
    from signal import SIGKILL
ImportError: cannot import name 'SIGKILL' from 'signal' (C:\Users\15194\anaconda3\envs\ai_env\lib\signal.py)
```

## Expected behavior

The package to import.

## Environment info

- `transformers` version: 2.11
- Platform: Windows 10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.0, GPU = yes (torch.cuda.is_available == True)
- Tensorflow version (GPU?): 2.2.0; GPU = yes
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
06-15-2020 09:48:02
06-15-2020 09:48:02
Just had to reinstall. I installed pytorch AFTER transformers, and I cloned the transformers repository. Just uninstalled and reinstalled with pip install transformers and the issue went away.
transformers
5,002
closed
Illegal memory access (cudaErrorIllegalAddress)
# 🐛 Bug

## Information

This bug/problem has been discussed on [Pytorch](https://github.com/pytorch/pytorch/issues/21819)/[Apex](https://github.com/NVIDIA/apex/issues/319) and [here](https://github.com/huggingface/transformers/issues/460) (the bot marked it as stale) as well.

I'm using Albert on GLUE (although this issue is model/dataset agnostic). I've made slight modifications to my train loop (as compared to `train()` in `Trainer()`). The main one which throws this error is when I compute the gradients:

```python
grad = torch.autograd.grad(loss, model.parameters(), allow_unused=True)
```

where loss is simply `model(**inputs)[0]`.

I'm using Pytorch 1.5.0+cu101 and transformers 2.11 on one GPU, no multi-GPU, although the instance has 2 (restricted via CUDA_VISIBLE_DEVICES=0). I also tried with `torch.cuda.set_device()`. Can you suggest a workaround?
06-15-2020 08:57:41
06-15-2020 08:57:41
After reducing the batch size further, this error is no longer raised, but a lot of RAM is left unused. If this is a memory issue, then an out-of-memory ("RAM demand exceeded") error should be raised instead.<|||||>I've seen this error, and think it happens right before `OutOfMemory`. I agree the traceback should be different. Marking wontfix for now since this is a torch/apex issue, as you suggest, not a transformers issue. <|||||>I don't think it's an Apex issue either, because I ran my code without fp16 integration earlier. It is mostly a pytorch issue. I am not sure how RAM usage spikes in such a short time: initially, 10 GB of RAM are free and suddenly this error pops up. Halving the batch size helped, but there are no signs of memory leakage. Not really sure what's happening. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,001
closed
Segmentation fault when loading pretrained file
When loading a model weights file using `model.from_pretrained`, a segmentation fault error occurs.
06-15-2020 08:52:56
06-15-2020 08:52:56
I am having the same problem - I had my code working about a week ago. Then, I installed the library (`pip install transformers`) on a new machine and now it crashes when I try to load any pre-trained model (e.g `BertModel.from_pretrained('bert-base-uncased')`). I tried downgrading to 2.9.1 and 2.10 and the problem persisted. PyTorch version: 1.4.0 GPU type: 'Tesla V100-SXM2-16GB'<|||||>This is the full debug log output: ``` DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443 DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /models.huggingface.co/bert/bert-base-uncased-config.json HTTP/1.1" 200 0 DEBUG:filelock:Attempting to acquire lock 140382636847512 on /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517.lock INFO:filelock:Lock 140382636847512 acquired on /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517.lock INFO:transformers.file_utils:https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json not found in cache or force_download set to True, downloading to /home/ec2-user/.cache/torch/transformers/tmpppid3hrz DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443 DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "GET /models.huggingface.co/bert/bert-base-uncased-config.json HTTP/1.1" 200 433 HBox(children=(FloatProgress(value=0.0, description='Downloading', max=433.0, style=ProgressStyle(description_… INFO:transformers.file_utils:storing https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json in cache at /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517 INFO:transformers.file_utils:creating metadata file for /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517 DEBUG:filelock:Attempting to release lock 140382636847512 on /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517.lock INFO:filelock:Lock 140382636847512 released on /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517.lock INFO:transformers.configuration_utils:loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json from cache at /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517 INFO:transformers.configuration_utils:Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "type_vocab_size": 2, "vocab_size": 30522 } DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): cdn.huggingface.co:443 
DEBUG:urllib3.connectionpool:https://cdn.huggingface.co:443 "HEAD /bert-base-uncased-pytorch_model.bin HTTP/1.1" 200 0 DEBUG:filelock:Attempting to acquire lock 140384545811816 on /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157.lock INFO:filelock:Lock 140384545811816 acquired on /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157.lock INFO:transformers.file_utils:https://cdn.huggingface.co/bert-base-uncased-pytorch_model.bin not found in cache or force_download set to True, downloading to /home/ec2-user/.cache/torch/transformers/tmp9qzw5qor DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): cdn.huggingface.co:443 DEBUG:urllib3.connectionpool:https://cdn.huggingface.co:443 "GET /bert-base-uncased-pytorch_model.bin HTTP/1.1" 200 440473133 HBox(children=(FloatProgress(value=0.0, description='Downloading', max=440473133.0, style=ProgressStyle(descri… INFO:transformers.file_utils:storing https://cdn.huggingface.co/bert-base-uncased-pytorch_model.bin in cache at /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157 INFO:transformers.file_utils:creating metadata file for /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157 DEBUG:filelock:Attempting to release lock 140384545811816 on /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157.lock INFO:filelock:Lock 140384545811816 released on /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157.lock ```<|||||>> I am having the same problem - I had my code working about a week ago. Then, I installed the library (`pip install transformers`) on a new machine and now it crashes when I try to load any pre-trained model (e.g `BertModel.from_pretrained('bert-base-uncased')`). > I tried downgrading to 2.9.1 and 2.10 and the problem persisted. > > PyTorch version: 1.4.0 > GPU type: 'Tesla V100-SXM2-16GB' Thanks for your reply, How did you make your code work before?<|||||>So, I just realized that today I was using a different version of PyTorch (I am working with Amazon's SageMaker and had to spin up a new instance). I upgraded my PyTorch to 10.5 and cudatools to 10.2 (`conda install pytorch torchvision cudatoolkit=10.2 -c pytorch`) and it just started working again..hope that helps<|||||>@akornilo There is something wrong with my pytorch,turning pytorch version to 1.5+cuda9.2 makes it works. Thx for your advice. <|||||>Glad you could resolve your issue! Feel free to reopen if you see the same issue down the road.<|||||>> I am having the same problem - I had my code working about a week ago. Then, I installed the library (`pip install transformers`) on a new machine and now it crashes when I try to load any pre-trained model (e.g `BertModel.from_pretrained('bert-base-uncased')`). > I tried downgrading to 2.9.1 and 2.10 and the problem persisted. > > PyTorch version: 1.4.0 > GPU type: 'Tesla V100-SXM2-16GB' Downgrade sentencepiece to 0.1.91. 
This worked for me after being stuck on the same problem as yours.