| Column | Type | Values / Length |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
5,300
closed
T5ForConditionalGeneration fp16 nan loss
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: (Pdb) net_inputs["attention_mask"] tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0') (Pdb) net_inputs["lm_labels"] tensor([[ 3, 1489, 89, -100, -100, -100], [ 5441, 1511, 13, 3, 10314, 152]], device='cuda:0') (Pdb) net_inputs["lm_labels"] tensor([[ 3, 1489, 89, -100, -100, -100], [ 5441, 1511, 13, 3, 10314, 152]], device='cuda:0') (Pdb) 1. model = T5ForConditionalGeneration.from_pretrained("t5-large") 2. outputs = model(input_ids=net_inputs["input_ids"], attention_mask=net_inputs["attention_mask"], lm_labels=net_inputs["lm_labels"]) 3. loss, lm_logits = outputs[0], outputs[1]; print(loss) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Seems like a masking issue. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: V100 - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?):N/A - Using GPU in script?: yes - Using distributed or parallel set-up in script?: parallel
06-26-2020 05:55:59
06-26-2020 05:55:59
See #4586.
transformers
5,299
closed
No documentation for MMBT on official docs
I tried finding MMBT on the official documentation: https://huggingface.co/transformers/ but I could not find any references to it, even though there is an implementation for it in the source code.
06-26-2020 05:10:50
06-26-2020 05:10:50
I also cannot figure out how to use this model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,298
closed
The start and end position of BertForQuestionAnswering
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I am confused with the start and end position of Bert for QA model because of wordpiece. My question is: what is the value of position based on? For example, if start_position=10, it means the 10th word of the input or the 10th subword of input after wordpiece? Also, is the position value contains the length of question? <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
06-26-2020 04:20:59
06-26-2020 04:20:59
The position is based on the tokens. So this means that if you have a context: "The dog runs in the park." and a question "Who runs in the park?" and the corresponding tokens (=`tokenizer("Who runs in the park?", "The dog runs in the park.",).input_ids`) are `[101, 2040, 3216, 1999, 1996, 2380, 1029, 102, 1996, 3899, 3216, 1999, 1996, 2380, 1012, 102]`, then the model will tell you at what start and end position **of the input_ids** the answer to the question will be located. To better understand how to correctly use QA you might want to take a look at the example under this model in the docs: https://huggingface.co/transformers/master/model_doc/bert.html#bertforquestionanswering
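To make the index mapping concrete, here is a small sketch (not part of the original thread) that pulls the answer span out of the start/end logits. It assumes a SQuAD-fine-tuned checkpoint such as `bert-large-uncased-whole-word-masking-finetuned-squad` and slices the model output as a tuple, which works for both the older tuple returns and the newer output objects:

```python
import torch
from transformers import BertTokenizer, BertForQuestionAnswering

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForQuestionAnswering.from_pretrained(name)

inputs = tokenizer("Who runs in the park?", "The dog runs in the park.", return_tensors="pt")
start_logits, end_logits = model(**inputs)[:2]   # one score per position of input_ids

start = torch.argmax(start_logits)               # predicted start index into input_ids
end = torch.argmax(end_logits)                   # predicted end index (inclusive)
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids.tolist()))     # e.g. "the dog"
```

Because the positions index into `input_ids`, they already account for wordpieces and for the question tokens that precede the context.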
transformers
5,297
closed
Can we have a way for a tokenizer to transform word level or character level annotations?
# 🚀 Feature request If I have a string with some annotations and I want to tokenize it I'd like the tokenizer to be able to transform the annotations as well. For example suppose I have `s = "therein lies the problem."` and I'm interested in the substring `"the problem"`. So I have a string `s` and I know that the substring I'm interested in is at index 13:24. But then I tokenize `s` so that I can put it into a huggingface model and get out `['there', '##in', 'lies', 'the', 'problem', '.']` and it doesn't match up with my annotation anymore. Could we add an annotation as an additional argument to the tokenizer.tokenize function so that I could get something like the following: ``` tokenizer.tokenize("therein lies the problem.", selection=(13,24)) > ['there', '##in', 'lies', 'the', 'problem', '.'], [0, 0, 0, 1, 1, 0] ``` Substrings of interest are almost always going to line up with token boundaries so it doesn't matter too much what happens to a token that is partially in and partially outside of the selected region. Is there a way of doing something like this now?
06-26-2020 01:09:53
06-26-2020 01:09:53
I just found out about the `return_offsets_mapping` functionality in the `PreTrainedTokenizerFast` tokenizers. I think I can use that functionality to solve my problem. <|||||>@brian8128 could you share a code example of how you do that?<|||||>Hey Avijit - I didn't test this because it's sort of copied from various places in my codebase but here's the general idea. There may be multiple different char level labels within one token so here I take the most common one using the scipy.stats.mode function. ``` import numpy as np from scipy.stats import mode from transformers import BertTokenizerFast texts = ... # list of strings char_level_labels = ... # list of 1-d numpy arrays corresponding in length to texts tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') batch = tokenizer.batch_encode_plus(texts, return_offsets_mapping=True) offset_mappings = batch['offset_mapping'] token_level_labels = [] for offset_mapping, char_level_label in zip(offset_mappings, char_level_labels): token_level_label = [] for so, eo in offset_mapping: # Huggingface adds a start and end token that don't correspond to any # chars, we'll label these tokens with -1 label = mode(char_level_label[so:eo]).mode[0] if eo - so > 0 else -1 token_level_label.append(label) token_level_labels.append(np.array(token_level_label)) ```
transformers
5,296
closed
Update outdated TensorFlow -> PyTorch model transfer CLI example
The example is outdated and reports an error.
06-26-2020 00:02:05
06-26-2020 00:02:05
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=h1) Report > Merging [#5296](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cc15bdd9675d1cec9186a8963c1f59be899ee68&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5296/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5296 +/- ## ========================================== - Coverage 79.29% 79.28% -0.01% ========================================== Files 138 138 Lines 24280 24280 ========================================== - Hits 19252 19251 -1 - Misses 5028 5029 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5296/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.74% <0.00%> (-0.20%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5296/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (ø)` | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=footer). Last update [7cc15bd...65cabe0](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Could someone review this PR?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,295
closed
Is summing of attention_mask intended?
# ❓ Questions & Help Is summing of `attention_mask` intended? ## Details Documents describe `attention_mask` as a mask to avoid performing attention on padding token indices. However from the [code](https://github.com/huggingface/transformers/blob/2ffef0d0c7a6cfa5a59c4b883849321f66c79d62/src/transformers/modeling_bert.py#L243) in `BertSelfAttention` the attention mask is added to the scores. ``` attention_scores = attention_scores + attention_mask ``` Is this intended? Does this fully mask out the context?
06-25-2020 22:01:33
06-25-2020 22:01:33
Yes, it's intended. The attention mask has values of 0 where its attending to the tokens, and has a value of `-10000` for tokens it's not attending. By summing this attention mask, it zeros out the attentions that should not be kept. See [here](https://github.com/huggingface/transformers/blob/2ffef0d0c7a6cfa5a59c4b883849321f66c79d62/src/transformers/modeling_utils.py#L228) for the implementation.<|||||>@LysandreJik I found that `attention_mask` is simply computed by `encoded_inputs["attention_mask"] = [1] * len(encoded_inputs["input_ids"])` in [tokenization_utils_base.py](https://github.com/huggingface/transformers/blob/2ffef0d0c7a6cfa5a59c4b883849321f66c79d62/src/transformers/tokenization_utils_base.py#L1944). Should `attention_mask` be synchronized with `encoded_inputs["input_ids"]` when a [MASK] appears in the input? For example, Input: This is [MASK] pen. Should the `attention_mask` be `1 1 0 1` rather than `1 1 1 1`?<|||||>The attention mask is computed according to the padding tokens, not masking tokens. See [here](https://github.com/huggingface/transformers/blob/2ffef0d0c7a6cfa5a59c4b883849321f66c79d62/src/transformers/tokenization_utils_base.py#L1932) for a sequence that requires padding.
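For illustration, a minimal sketch (not from the thread) of how the 0/1 padding mask is turned into the additive mask described above; it just reproduces the arithmetic of the linked helper in `modeling_utils.py`:

```python
import torch

attention_mask = torch.tensor([[1, 1, 1, 1, 0, 0]])        # 1 = real token, 0 = padding
extended = attention_mask[:, None, None, :].float()        # broadcastable over heads and query positions
extended = (1.0 - extended) * -10000.0                     # 0.0 for real tokens, -10000.0 for padding

# inside self-attention: attention_scores = attention_scores + extended
# after the softmax, padded positions receive (near-)zero attention weight
```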
transformers
5,294
closed
Slow Integration Test for examples/seq2seq/finetune.py
Train `sshleifer/student-xsum-12-3` for N batches on xsum data. Ensure that the loss goes down and is below ~2. This test should ONLY run on GPU and takes between 30s and 3 mins. Python code to fetch xsum data: ``` import wget import tarfile wget.download('https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz') tarball = tarfile.open('xsum.tar.gz') tarball.extractall() data_dir='xsum' ``` cc @williamFalcon
06-25-2020 20:39:29
06-25-2020 20:39:29
ok cool<|||||>I will work on this. @sshleifer, is it supposed to be w/o a tokenizer? ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("sshleifer/student_xsum_12_3") ``` ``` OSError: Model name 'sshleifer/student_xsum_12_3' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_12_3' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. ``` It's broken online too (Compute button): https://huggingface.co/sshleifer/student_xsum_12_3?text=My+name+is+Thomas+and+my+main<|||||>### Tokenizer issue @stas00 No that's just me being lazy, fixed that model, let me know if you need others. separately: Can you write to S3? I just ran ```bash cp_bart_tok () { export ss=s3://models.huggingface.co/bert/sshleifer aw3 cp $ss/distilbart-xsum-1-1/merges.txt $1 aw3 cp $ss/distilbart-xsum-1-1/tokenizer_config.json $1 aw3 cp $ss/distilbart-xsum-1-1/vocab.json $1 } cp_bart_tok $ss/student_xsum_12_1/ ``` so easy for me to copy the bart tokenizer to other places. ### Status of this issue this issue is moved forward for **translation** with `examples/seq2seq/test_bash_script.py` It's run by `.github/self-scheduled.yml`. There are many possible axes for improvement: - testing summarization - may not need to set `do_predict=False` [here]: (https://github.com/huggingface/transformers/blob/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c/examples/seq2seq/test_bash_script.py#L77). I made a PL issue where do_predict was breaking in fp16, but then I turned off fp16 here. - we could use a larger model like `sshleifer/student_mbart_en_ro_1_1/` and actually learn something (and wait longer). - we could run add a new github workflow against torch 1.6 (add a new `.github/next_torch_version.yml`) - understand current [failures](https://github.com/huggingface/transformers/runs/910644181?check_suite_focus=true) Thanks so much for your help and let me know where/how I can support! <|||||>> @stas00 No that's just me being lazy, fixed that model, let me know if you need others. The problem is still there - you can quickly test [here](https://huggingface.co/sshleifer/student_xsum_12_3?text=My+name+is+Thomas+and+my+main) > separately: Can you write to S3? I don't think I can - at least nobody gave me permissions to do so. <|||||>> ### Status of this issue > this issue is moved forward for translation with examples/seq2seq/test_bash_script.py > [...] > Thanks so much for your help and let me know where/how I can support! Let me study it first and I will ask follow up questions once I did so. I'm learning this library and while I have spare resources I'd be happy to help fixing problems - feel free to delegate some issue to me, so I won't need to dig through past issues looking for what I could work on. I am not sure github issue comments is the most efficient way to collaborate - perhaps you're on skype/email/some IM? My email is [email protected].<|||||>Closing this. Will make new issues that are not started.
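As a rough illustration of the test the issue asks for, here is a sketch written against the v3-style API rather than taken from `examples/seq2seq`; `load_xsum_pairs` is a hypothetical helper wrapping the download snippet from the issue body, and the `labels` argument is named `lm_labels` in older releases:

```python
import pytest
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

@pytest.mark.skipif(not torch.cuda.is_available(), reason="GPU-only integration test")
def test_student_xsum_loss_goes_down():
    model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/student_xsum_12_3").cuda()
    tokenizer = AutoTokenizer.from_pretrained("sshleifer/student_xsum_12_3")
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

    losses = []
    for document, summary in load_xsum_pairs("xsum", n=32):   # hypothetical data helper
        batch = tokenizer(document, truncation=True, max_length=512, return_tensors="pt").to("cuda")
        labels = tokenizer(summary, truncation=True, max_length=60, return_tensors="pt").input_ids.to("cuda")
        loss = model(**batch, labels=labels)[0]
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        losses.append(loss.item())

    assert losses[-1] < losses[0]   # loss goes down
    assert losses[-1] < 2.0         # and ends below ~2
```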
transformers
5,293
closed
run_squad.py :: ValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: [*] the official example scripts: (give details below) [ ] my own modified scripts: (give details below) The tasks I am working on is: [*] an official GLUE/SQUaD task: (give the name) [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. while running run_squad.py 2. Training & testing with https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Error File "/usr/lib/python3.6/multiprocessing/pool.py", line 320, in <genexpr> return (item for chunk in result for item in chunk) File "/usr/lib/python3.6/multiprocessing/pool.py", line 735, in next raise value ValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. ## Expected behavior To work ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-5.3.0-1017-aws-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
06-25-2020 20:19:38
06-25-2020 20:19:38
Hi! Do you mind pasting the command you use to run the script?<|||||>I'm facing the same error while training Electra and MiniLM on Squad: ValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. This error is thrown after reading about 2% of the examples. My env is the same as above. The command I use is: python examples/question-answering/run_squad.py \ --model_type bert \ --model_name_or_path microsoft/MiniLM-L12-H384-uncased \ --do_train \ --do_eval \ --do_lower_case \ --train_file "/content/transformers/dev-v1.1.json" \ --predict_file "/content/transformers/dev-v1.1.json" \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir "/content/drive/My Drive/bert/newdir5" <|||||>Thanks, looking into it.<|||||>It works now!<|||||>Yes, it should be fixed in v3.0.0!
transformers
5,292
closed
Saving and loading tokenizers with torch.save fails
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Albert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Load albert base tokenizer using `AutoTokenizer.from_pretrained` 2. Save it to a file using `torch.save` 3. Delete `~/.cache/torch/transformers` directory 4. Now try to load from the file using `torch.load` 5. Loading fails as the cached file does not exist ``` import transformers import torch token = transformers.AutoTokenizer.from_pretrained("albert-base-v2") torch.save({"token":token}, "./token.pt") ``` Delete `~/.cache/torch/` directory Then Run ``` import torch torch.load("./token.pt") ``` ## Expected behavior Tokenizer should load successfully. ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.19.104-microsoft-standard-x86_64-with-debian-bullseye-sid - Python version: 3.7.6 - PyTorch version (GPU?): 1.3.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
06-25-2020 20:08:39
06-25-2020 20:08:39
I met save issue before. Based on my experience, it is more related to PyTorch. See https://pytorch.org/tutorials/beginner/saving_loading_models.html, you should use state_dict to save and load model.<|||||>But @FacingBugs, this is an issue with the tokenizers and not the models <|||||>For tokenizer, using Transformer package provided API: ``` tokenizer.save_pretrained(your_output_model_dir) ```<|||||>@FacingBugs actually I have raised this bug because it was causing an issue in another library which uses this package https://github.com/flairNLP/flair/issues/1712 And since `torch.save` is mostly used to persist the models and dependencies for pytorch based learning, I believe the fix should be implemented in the transformers library itself rather than other dependent libraries which may add on top of transformers to provide their custom pytorch models in which case `torch.save` would mostly be used to save the models.<|||||>I think according to the PyTorch documentation, ```torch.save()``` is not recommended. I cite this from the documentation: "However in this case, the serialized data is bound to the specific classes and the exact directory structure used, so it can break in various ways when used in other projects, or after some serious refactors." For my personal experience with the Transformer package, the ```xx.save_pretrained()``` works for most of the cases (models, tokenizers, configs). For the tokenizer, I think the package actually saved several other files besides the vocab file. I think using the save_pretrained method should be the best practice. Hope this can help you.<|||||>Wouldn’t this make persisting other models on top of transformers difficult because now we have to save and track multiple files instead of a single file?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
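For reference, a minimal example of the `save_pretrained`/`from_pretrained` round trip recommended above (standard tokenizer API, shown here for convenience rather than quoted from the thread):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
tokenizer.save_pretrained("./albert_tokenizer")            # writes vocab, config and special-token files

# later, even after ~/.cache/torch/transformers has been deleted:
tokenizer = AutoTokenizer.from_pretrained("./albert_tokenizer")
```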
transformers
5,291
closed
CircleCI stores cleaner output at test_outputs.txt
Changes: - Don't run pytest with `-v` or `--cov`. I left run_tests_torch_and_tf using `--cov` but would love to delete that. I have never used circleci to determine code coverage. - the run* jobs create artifact files called test_output.txt that are easier to read than scrolling in the circleci gui - self scheduled runner also attempts to make a test_output.txt Before: ![image](https://user-images.githubusercontent.com/6045025/85790787-d33a8e80-b6fe-11ea-9e10-d4c48dfe713f.png) After: [artifact](https://53278-155220641-gh.circle-artifacts.com/0/test_output.txt) is very manageable and [ui](https://app.circleci.com/pipelines/github/huggingface/transformers/8144/workflows/816b1f60-bda9-4456-9f51-92d6cff7b266/jobs/53278/steps) is less noisy.
06-25-2020 19:47:40
06-25-2020 19:47:40
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=h1) Report > Merging [#5291](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/79a82cc06aaa68088639bf9bb000752cfd33a8c6&el=desc) will **decrease** coverage by `1.39%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5291/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5291 +/- ## ========================================== - Coverage 79.29% 77.90% -1.40% ========================================== Files 138 138 Lines 24282 24282 ========================================== - Hits 19254 18916 -338 - Misses 5028 5366 +338 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (-0.24%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.48% <0.00%> (-0.15%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=footer). Last update [79a82cc...73e5da8](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I wonder how it looks when it fails. Is it the same output you would usually see? If that's the case, I'm all for that change!<|||||>Yes the tracebacks are completely unchanged. You just only see a `.` if the tests passes. And you don't have to scroll through `logger.info` (if you don't want to, it's still in the default circleci page).<|||||>Merging. Tag me if any issues.
transformers
5,290
closed
Model with fastest inference?
Just curious whether there is any benchmark for inference speed of models in the transformers library? I am interested in the question-answer task but I think a benchmark on any tasks that BERT can do should be good. Thank you.
06-25-2020 19:45:12
06-25-2020 19:45:12
@tqdo There is extensive info and a spreadsheet [here](https://huggingface.co/transformers/benchmarks.html?highlight=benchmark).<|||||>Thanks a lot
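Besides the linked spreadsheet, the library ships benchmark utilities for measuring inference speed and memory yourself; a short sketch, with class names as in the 2.11/3.x benchmark module (double-check them against your installed version):

```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased", "distilbert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[128, 384],
)
results = PyTorchBenchmark(args).run()   # reports inference time and memory per model/configuration
```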
transformers
5,289
closed
[pipelines] Change summarization default to distilbart-cnn-12-6
- Also adds an integration test that runs on GPU if available. - Other pipelines could do the same if that would be helpful.
06-25-2020 19:31:12
06-25-2020 19:31:12
CI failure is spurious.<|||||>Could you please tell me what the **12 & 6** in **distilbart-cnn-12-6** stand for?<|||||>12 encoder layers and 6 decoder layers, I would suggest.<|||||>Thank you, Sir, @patrickvonplaten
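For context, requesting the new default checkpoint explicitly looks like this; a short sketch assuming the standard pipeline API:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
article = "The long article text to be summarized goes here ..."
print(summarizer(article, max_length=60, min_length=20)[0]["summary_text"])
```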
transformers
5,288
closed
Is there a Longformer For Sequence Classification?
I noticed that Longformer does not have a "LongformerForSequenceClassification". Is there a reason for this and is this something that would be added in the near future?
06-25-2020 19:29:17
06-25-2020 19:29:17
It is [there](https://github.com/huggingface/transformers/blob/24f46ea3f3e5006ca38735306753a846a0823174/src/transformers/modeling_longformer.py#L796). You may need to use a source install; I'm not sure it was already there in the last release.<|||||>Thanks. It was in the release, but it just wasn't in the documentation.
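A minimal usage sketch, assuming the `allenai/longformer-base-4096` checkpoint; note the classification head is freshly initialized, so the logits are only meaningful after fine-tuning:

```python
import torch
from transformers import LongformerTokenizer, LongformerForSequenceClassification

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096", num_labels=2)

inputs = tokenizer("A very long document ...", return_tensors="pt")
logits = model(**inputs)[0]                  # classification head applied to the <s> token representation
print(torch.softmax(logits, dim=-1))
```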
transformers
5,287
closed
[tokenizers] Several small improvements and bug fixes
Various improvements for tokenizers: - Avoid a recursion loop for special token id look-up in Fast tokenizers - Fix #5232 by removing the unsupported method `convert_tokens_to_string` for Fast tokenizers - Fix #5256 by aligning the behavior of the slow tokenizer with that of the fast tokenizer for special tokens inside the input. A little bit of background on the modifications in the Roberta tokenizer: We now align the behavior of the byte-level BPE tokenizer with the Fast version, which is the most consistent with the way the original tokenizer behaved: all the special tokens are assumed to not have a prefix space, so the user can control whether or not there is a space in the string. We make an exception for the mask token in Roberta, which is assumed to represent a word and thus has a prefix space by default (can be overridden at initialization). This is necessary to be able to use Roberta in fill-mask completion easily. This is already built in for the Fast tokenizer. Here I update the slow tokenizer to have this behavior using the newly introduced `AddedToken`, which lets you control the space behavior of the special tokens.
06-25-2020 19:14:49
06-25-2020 19:14:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=h1) Report > Merging [#5287](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `97.14%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5287/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5287 +/- ## ======================================= Coverage 79.08% 79.08% ======================================= Files 138 138 Lines 24078 24093 +15 ======================================= + Hits 19041 19054 +13 - Misses 5037 5039 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.15% <94.44%> (-0.01%)` | :arrow_down: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.18% <100.00%> (+0.06%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <100.00%> (ø)` | | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <100.00%> (-0.09%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.16% <0.00%> (-0.32%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=footer). Last update [24f46ea...209dcc7](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
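To illustrate the `AddedToken` behavior the PR describes, a rough sketch; depending on the version, `AddedToken` may need to be imported from the `tokenizers` package instead, and the keyword arguments shown are assumptions to check against the release you use:

```python
from transformers import RobertaTokenizer, AddedToken   # AddedToken may live in `tokenizers` instead

# The Roberta mask token is treated as a word with a prefix space (lstrip=True) by default,
# which is what fill-mask completion expects; it can be overridden at initialization:
tokenizer = RobertaTokenizer.from_pretrained(
    "roberta-base",
    mask_token=AddedToken("<mask>", lstrip=True, rstrip=False),
)
print(tokenizer.tokenize("The capital of France is <mask>."))
```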
transformers
5,286
closed
save_pretrained on master results in tokenizers that cannot be loaded in v2.11
# 🐛 Bug ## Information Model I am using sshleifer/distilbart- * The problem arises when using: tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-3") -it fails here ## To reproduce Steps to reproduce the behavior: tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-3") -it fails here ![image](https://user-images.githubusercontent.com/40685761/85780041-de0c1800-b72c-11ea-840e-ed4402da3a01.png) ## Environment info - `transformers` version:2.8.0 - Platform: - Python version: 3.6 - PyTorch version (GPU?): https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp37-cp37m-linux_x86_64.whl - Tensorflow version (GPU?): - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
06-25-2020 18:45:02
06-25-2020 18:45:02
Would it be possible to run ``` pip install transformers --upgrade ``` and try again? We have fixed a lot of bugs since 2.8.0 Pasted tracebacks are much easier to read than screenshots. <|||||>I belive that i was runing it. Let me try one more time. чт, 25 июня 2020 г., 21:50 Sam Shleifer <[email protected]>: > Would it be possible to run > > pip install transformers --upgrade > > and try again? We have fixed a lot of bugs since 2.8.0 > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/5286#issuecomment-649756239>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AJWNBQJVBSPKL52ZGPRPJ4LRYOL6XANCNFSM4OIUNXTQ> > . > <|||||>Just checked it twice. Looks like I've run it in another conda env. Here is an another error message(with transformers==2.11.0). ![image](https://user-images.githubusercontent.com/40685761/85783301-dc901f00-b72f-11ea-98ae-cbbdbf37695f.png) <|||||>Would you like me to create another issue?<|||||>I can reproduce now, thanks. Will fix.<|||||>Issue is that code on master saves `special_tokens_map.json` as ``` {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}} ``` and `v2.11` cannot load this format (where `mask_token` is a dict). I deleted `special_tokens_mask.json`, which seems to fix things. (the original `facebook/bart-large-cnn/` doesn't have a `special_tokens_mask.json`). cc @thomwolf<|||||>I'm able to create tokenizer only for 'distilbart-xsum-12-1' and 'distilbart-xsum-9-6' (I still see 'special token mask_token... error for all other distilbart tokenizers') The model can be uploaded only with these tokenizers. Then on the summarization step, I'm getting the following error: ![image](https://user-images.githubusercontent.com/40685761/85829662-cddb5380-b793-11ea-9971-09f58fe517b3.png) Reproducible with both PyTorch versions: 1.5.1 and https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp37-cp37m-linux_x86_64.whl<|||||>Could I see the command you ran + more traceback/like what the ids were? Or could you try to reproduce the issue in google colab? <|||||>1. **When I'm trying to create a tokenizer with the following command:** tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-6") it fails with: "special token {} has to be either str or AddedTokenFast but got: {}".format(key, type(value)) TypeError: special token mask_token has to be either str or AddedTokenFast but got: <class 'dict'> ----------------------------------------- 2. **And here is the code snippet for another error message:** tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-9-6") model = AutoModelWithLMHead.from_pretrained("sshleifer/distilbart-xsum-9-6") self.summarizer = pipeline("summarization", model=model, tokenizer=tokenizer) self.summarizer(text) # It fails here. 
The "text" variable contains the following(It was working with the simple text part from Wikipedia but fails with the following one): June 29, 2020 | Primary Care Collaborative July 22, 2020 | National Hispanic Medical Association July 29, 2020 | Business Health Coalition June 23, 2020 | The Hill June 25, 2020 June 24, 2020 | Primary Care Collaborative News Room Topic June 25, 2020 Primary care practices are projected to lose more than $65,000 in revenue per full-time physician in 2020, following drastic declines in office visits and fees for services from March to May during the COVID-19 pandemic, according to a... June 24, 2020 | Primary Care Collaborative In the wake of police brutality and pervasive racial injustice, which has spurred numerous, ongoing demonstrations across the country, the Primary Care Collaborative (PCC) reaffirms its commitment to racial equality. PCC underscores this... June 24, 2020 | Primary Care Collaborative On June 18, PCC joined many other leading organizations in the primary care community in an hour-long chat on Twitter about the current and future state of primary care during the coronavirus pandemic. If you missed the conversation, you... June 23, 2020 | The Hill Anthony Fauci, the nation's top infectious disease expert, said Tuesday that he thinks institutional racism has played a role in the disproportionate impact the coronavirus outbreak has had on the Black community in the U.S. "... June 20, 2020 WASHINGTON  —  Even as hospitals and physicians’ offices nationwide struggle to stay afloat amid the downturn caused by coronavirus, a small group of clinics is thriving, sustained by a model of care that many experts hope could reshape... June 18, 2020 | Primary Care Collaborative Check back weekly for the latest survey results and updates. For last week's data, see Week 13 Results. Who replied to the survey in Week 14? The Larry A. Green Center, the Primary Care Collaborative and 3rd Conversation are partnering... June 18, 2020 | PCPCC Press Release WASHINGTON (June 18, 2020) – The Larry A. Green Center, in collaboration with the Primary Care Collaborative (PCC) and 3rd Conversation, today released new data showing that more than 80 percent of primary care clinicians say professional... June 12, 2020 | The Commonwealth Fund On this episode of The Dose podcast, health policy expert Farzad Mostashari, M.D., who advises and supports hundreds of primary care practices across the country, explains what it will take to ensure doctors can continue caring for... June 12, 2020 | Primary Care Collaborative Six former leaders of the Centers for Medicare and Medicaid Services sent a joint letter June 10 to congressional leaders about the role of payment and regulatory flexibility in responding to the COVID-19 pandemic and addressing serious... June 12, 2020 | PR Newswire SAN FRANCISCO, June 12, 2020 -- Innovaccer, Inc., a leading healthcare technology company [and a PCC Executive Member] released its research-based report, titled "What COVID-19 Means to American Healthcare: Trends, Impacts, Predictions,... June 10, 2020 | Primary Care Collaborative Check back weekly for the latest survey results and updates. For last week's data, see Week 12 Results. Who replied to the survey in Week 13? A primary care clinician survey (weekly) and a patient survey (generally every other week) are... June 10, 2020 | PCPCC Press Release WASHINGTON (June 10, 2020) – The Larry A. 
Green Center, in collaboration with the Primary Care Collaborative (PCC) and 3rd Conversation, today released new data showing that a staggering 86 percent of Americans believe racism is impacting... June 4, 2020 | PCPCC Press Release WASHINGTON (June 4, 2020) – New survey data released today by the Larry A. Green Center, in collaboration with the Primary Care Collaborative (PCC) and 3rd Conversation, shows that over 70% of primary care patients are comfortable using... June 3, 2020 | Primary Care Collaborative Check back weekly for the latest survey results and updates. For last week's data, see Week 11 Results. Who replied to the survey in Week 12? A primary care clinician survey (weekly), and a patient survey (generally every other week) are... June 1, 2020 | The Hill The COVID-19 pandemic has unmasked many weaknesses in our public health and health care systems. But the outbreak also has accelerated, within weeks, useful health care innovations that would have normally taken years to develop. A strong... June 1, 2020 The week of June 1 is a time of national advocacy for primary care. The PCC and many other organizations are part of this campaign, called #saveprimarycare. We are reaching out to Congress and the administration to call for dedicated... May 27, 2020 | Primary Care Collaborative Check back weekly for the latest survey results and updates. For last week's data, see Week 10 Results. Who replied to the surveys? The Larry A. Green Center is now fielding two separate surveys: one to primary care clinicians, and a... May 27, 2020 WASHINGTON (May 27, 2020) – In new data released today by the Larry A. Green Center, in collaboration with 3rd Conversation and the Primary Care Collaborative (PCC), Americans report feeling “panicked, upset, or heartbroken” at the... May 21, 2020 WASHINGTON, May 21, 2020—In a new survey of primary care clinicians and their response to the COVID-19 pandemic, conducted May 15-18, more than half (55%) fear they are unprepared for the next wave of the pandemic due to high stress among... May 21, 2020 | Primary Care Collaborative Check back weekly for the latest survey results and updates. For last week's data, see Week 9 Results. Who replied to the survey in Week 10? The week 10 sample was much smaller (736) than last week’s sample and of relatively different... Pages <|||||>Any updates? <|||||>Looks like I've found an issue with: "special token {} has to be either str or AddedTokenFast but got: {}".format(key, type(value)) TypeError: special token mask_token has to be either str or AddedTokenFast but got: <class 'dict'> The issue is fixed. The problem was in my local cache so now it works. But it still fails for the summarization using the text above.
transformers
5,285
closed
Roberta's Positional Embedding Offset
https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/src/transformers/modeling_bart.py#L754 https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/src/transformers/modeling_bart.py#L763 So this offset is added because the function `create_position_ids_from_input_ids` shifts the position ids by padding_idx + 1. However, I wonder if other models should also include this? https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/src/transformers/modeling_roberta.py#L54 For instance, when I am using `Longformer`, it looks like the offset is not added to `Roberta`, so I need to add such an offset to config.max_position_embeddings
06-25-2020 18:27:36
06-25-2020 18:27:36
That's certainly possible. As you can see from my comment, and PR #5188 , I don't fully understand the motivation for the offset. It is very tricky.<|||||>I figured out why. See here https://github.com/pytorch/fairseq/issues/1177 So basically the purpose is to make positional embedding = 0 on padding positions (positions where token is padding token), using the `padding_idx` parameter in torch.nn.Embedding. I think we can simply use masked_fill() to make positional embedding = 0 on padding positions, so the code is easier to understand (no need for the offset).<|||||>Exactly! Would love to do that, but the migration of the existing bart state dicts is non trivial, since they already store the extra position embedding. Even if we tracked down all bart models with `config.static_position_embeddings=False` and resized their positional embeddings, we would break code that is not up to date w master (lots of code). So I think we must settle for documenting what is going on better in `LearnedPositionalEmbedding` and accept the unfortunate reality that we are stuck with the offset forever (or until we have some futuristic model hub tooling to version state dicts).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
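For readers landing here, the logic under discussion is roughly the following (a paraphrase of the fairseq/Roberta convention: real tokens get positions starting at `padding_idx + 1`, padding keeps `padding_idx`, hence the extra offset on the embedding size):

```python
import torch

def create_position_ids_from_input_ids(input_ids, padding_idx):
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = torch.cumsum(mask, dim=1) * mask    # 1, 2, 3, ... on real tokens, 0 on padding
    return incremental_indices.long() + padding_idx           # padding positions map back to padding_idx

print(create_position_ids_from_input_ids(torch.tensor([[10, 11, 12, 1, 1]]), padding_idx=1))
# tensor([[2, 3, 4, 1, 1]])  -> the embedding row at padding_idx stays zero for the padded slots
```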
transformers
5,284
closed
Tokenizer batch_encode_plus unexpected behavior
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): bert-base-multilingual-cased Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) A script to tokenize sentences into inputs for the model The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Standard text that is used for intent and domain classification ## To reproduce Steps to reproduce the behavior: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased") tokenizer.batch_encode_plus(["hello my name is Sam"], return_tensors="pt", pad_to_max_length=True, add_special_tokens=True)['input_ids'] >>>tensor([[ 101, 61694, 10133, 15127, 11324, 10124, 14268, 102]]) tokenizer.batch_encode_plus(["hello my name is Sam"], return_tensors="pt", pad_to_max_length=True, add_special_tokens=False)['input_ids'] >>>tensor([[61694, 10133, 15127, 11324, 10124, 14268, 0, 0]]) tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") tokenizer.batch_encode_plus(["hello my name is Sam"], return_tensors="pt", pad_to_max_length=True, add_special_tokens=True)['input_ids'] >>>tensor([[ 0, 33600, 31, 759, 9351, 83, 3362, 2]]) tokenizer.batch_encode_plus(["hello my name is Sam"], return_tensors="pt", pad_to_max_length=True, add_special_tokens=False)['input_ids'] >>>tensor([[33600, 31, 759, 9351, 83, 3362, 1, 1]]) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior When running batch_encode_plus for a single example without adding special tokens and pad_to_max_length set to True, I would not expect any pad tokens. I did not see any mention in the documentation as to why this behavior is the expected norm. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: MacOS - Python version: 3.7.3 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
06-25-2020 18:04:32
06-25-2020 18:04:32
Yes, this was fixed on master today with https://github.com/huggingface/transformers/pull/5252
transformers
5,283
closed
Gpt2 model card
06-25-2020 18:03:33
06-25-2020 18:03:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=h1) Report > Merging [#5283](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5283/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5283 +/- ## ========================================== - Coverage 79.11% 79.09% -0.02% ========================================== Files 138 138 Lines 24080 24080 ========================================== - Hits 19050 19046 -4 - Misses 5030 5034 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (-0.15%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=footer). Last update [0e1fce3...a1c3ed8](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>(@julien-c approved offline)
transformers
5,282
closed
Bart: Instantiate lm_head once without wasting memory
Marking this down since I agreed to do it https://github.com/huggingface/transformers/pull/4803#discussion_r443381438 cc @patrickvonplaten
06-25-2020 15:47:33
06-25-2020 15:47:33
this will happen as part of TPU issue.<|||||>Can you link the issue? Is there an open PR for this already? <|||||>This one: https://github.com/huggingface/transformers/pull/5960 , but it's broken afaict.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
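The change being tracked here is essentially standard weight tying; a generic PyTorch sketch of the idea (not the actual Bart code), in which the output projection shares storage with the input embedding so no extra memory is allocated:

```python
import torch.nn as nn

vocab_size, d_model = 50265, 1024
embed_tokens = nn.Embedding(vocab_size, d_model)

lm_head = nn.Linear(d_model, vocab_size, bias=False)
lm_head.weight = embed_tokens.weight    # tied: one shared tensor, instantiated once

# the forward pass can then call lm_head(hidden_states) instead of
# re-building F.linear(hidden_states, embed_tokens.weight) on every step
```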
transformers
5,281
closed
Segmentation fault when trying to load models
We are using `Azure ML` pipelines to train our `transformers` models. We have had it working for a few weeks, and then recently (just noticed it a few days ago), when trying to initialize a model, we are getting `Segmentation fault`. I tried just loading the models locally this morning and have the same issues. See snippet below. ``` config = config_class.from_pretrained(model_name, num_labels=10) tokenizer = tokenizer_class.from_pretrained(model_name, do_lower_case=False) model = model_class.from_pretrained("distilroberta-base", from_tf=False, config=config) ``` I also tried to download the `*_model.bin` and pass a *local path* instead of the model *name* and also got a `Segmentation fault`. I also tried to use `bert-base-uncased` instead of `distilroberta-base` and had the same issue. I am running on Ubuntu, with the following package versions: ``` torch==1.3.0 tokenizers=0.0.11 transformers==2.4.1 ``` **UPDATE**: I hacked some example scripts and had success, so I *think* the issue is that our code uses... ``` "roberta": (RobertaConfig, RobertaForTokenClassification, RobertaTokenizer), "mroberta": (RobertaConfig, RobertaForMultiLabelTokenClassification, RobertaTokenizer), # our custom multilabel class ``` instead of what the example scripts use... ``` AutoConfig, AutoModelForTokenClassification, AutoTokenizer, ``` Was there a breaking change to model files recently that would mean that our use of the "non-auto" classes are no longer usable? **UPDATE 2**: Our original code does *not* cause a `Segmentation fault` on Windows.
06-25-2020 15:47:18
06-25-2020 15:47:18
Bumping to `torch==1.5.1` fixes this issue. But it's still unclear why.<|||||>I have also met the same issue and upgrading to torch1.5.1 also solves my problem.<|||||>Possibly related to https://github.com/huggingface/transformers/issues/4857<|||||>**Downgrade to sentencepiece==0.1.91 solve it.** I am using PyTorch 1.2.0 + transformers3.0.0<|||||>> **Downgrade to sentencepiece==0.1.91 solve it.** > I am using PyTorch 1.2.0 + transformers3.0.0 Also PyTorch 1.4.0 + transformers 3.0.2<|||||>Closing this as solved by #5418. Feel free to re-open if you still face an issue.<|||||>For me either adding `sentencepiece==0.1.91 + torch==1.3.1 + transformers==2.4.1` or `torch==1.5.1 + transformers==2.4.1` worked.<|||||>I come across the same problem too. My solution is just to import torch before import the transformers<|||||>> I come across the same problem too. My solution is just to import torch before import the transformers I followed your solution and it worked🤣. Before doing in this way, I downloaded almost every version of sentencepiece from 0.1.91 to 0.1.97. Although I do not know why, but it's something to happy about.
transformers
5,280
closed
Remove links for all docs
Now that someone has done an awesome version selector for the docs, there's no need to list all versions in the README.
06-25-2020 15:05:44
06-25-2020 15:05:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=h1) Report > Merging [#5280](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5280/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5280 +/- ## ========================================== - Coverage 79.11% 79.08% -0.03% ========================================== Files 138 138 Lines 24080 24080 ========================================== - Hits 19050 19043 -7 - Misses 5030 5037 +7 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5280/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5280/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5280/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (-0.15%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=footer). Last update [0e1fce3...49327ba](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,279
closed
Add DPR model
# Dense Passage Retrieval ## Intro The Dense Passage Retrieval (DPR) model from facebook ([github](https://github.com/facebookresearch/DPR), [arxiv](https://arxiv.org/abs/2004.04906)). It is used to do Open Domain Question Answering by extracting answer spans from a set of documents. This model actually comes in three different parts: - a context encoder - a question encoder - a reader for span prediction I did a schema to show the roles and the pipeline that one could build with those parts. The components in RED are the one in transformers. You can use whatever you want for the retrieval part, but there will be a new [retrieval feature](https://github.com/huggingface/nlp/pull/298) in the 🤗nlp library soon that will make the use of models like DPR easier. <img src="https://user-images.githubusercontent.com/42851186/85740169-b05da980-b701-11ea-975d-fbff4f368a5e.png" height="300"> ## Implementation - All three components share an encoding part with bert so I factorized this into the class `DprBertEncoder `. - The reader has a `.generate` method that finds the best spans and return them, while the `.forward` method only returns the logits. - In the config I allow to specify to load the weights from [files provided in the official repo](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py). I've already added one pretrained weight file per component in S3. ## Things I'd like to improve: - I think we can probably improve the tokenization step. Right now the reader inputs are currently two sets of input_ids. One for the question with text_pair=context_title, and one for the context_text (i.e. the content in which we are looking for answer spans). This is because they all need to be combined like ``` [CLS] <question_input_ids> [SEP] <context_title_input_ids> [SEP] <context_text_input_ids> ``` I was thinking of making a custom tokenizer just for the reader, let me know if it sounds reasonable. ## Example of usage (outdated) Provided we have a retrieval module (here named`wiki`) we can do: ```python tokenizer = DprTokenizer.from_pretrained('dpr-model-base') ctx_encoder = DprContextEncoder.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base') question_encoder = DprQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base') reader = DprReader.from_pretrained('facebook/dpr-reader-single-nq-base') # First step: retrieve the best wikipedia passages question = 'Who created the Pokemon games ?' question_emb = question_encoder(tokenizer.encode(question, return_tensors="pt")).numpy() scores, passages = wiki.get_nearest_examples("embeddings", question_emb, k=10) # Second step: Feed the reader with the question and the retrieved snippets encoded_question_and_titles = [ tokenizer.encode(question, text_pair=passage["title"], return_tensors="pt") for passage in passages] encoded_texts = [ tokenizer.encode(passage["text"], return_tensors="pt", add_special_tokens=False) for passage in passages] predicted_spans = reader.generate(encoded_question_and_titles, encoded_texts) # Last step: print the result best_span = predicted_spans[0] best_span_ids = encoded_texts[best_span.doc_id].numpy().flatten() best_span_ids = best_span_ids[best_span.start_index:best_span.end_index + 1] print(tokenizer.decode(best_span_ids)) # >>> satoshi tajiri ``` ----------------- I'd be very happy to have some feedbacks on this one, as it is my first contribution to the library :D
06-25-2020 15:05:17
06-25-2020 15:05:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=h1) Report > Merging [#5279](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7c41057d5090f5e665f2404878369ecb13939def&el=desc) will **decrease** coverage by `0.80%`. > The diff coverage is `37.29%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5279/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5279 +/- ## ========================================== - Coverage 78.34% 77.54% -0.81% ========================================== Files 138 141 +3 Lines 23841 24085 +244 ========================================== - Hits 18679 18676 -3 - Misses 5162 5409 +247 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `28.29% <28.29%> (ø)` | | | [src/transformers/configuration\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rwci5weQ==) | `62.50% <62.50%> (ø)` | | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.23% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (-0.92%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.12% <0.00%> (-0.89%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.50% <0.00%> (-0.32%)` | :arrow_down: | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=footer). Last update [7c41057...7a90958](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Looks very cool! I would have three general things: 1) For all parameter names I would try to stick to the ones we already have `config` instead of `cfg`, `pad_token_id` instead of `pad_id`. For me `Bert` is always the gold standard and I would try to name everything as it is named in Bert. All these `hidden_states`, `output_embeddings`, ... 2) I don't like single letter or very short variable names, it makes it very hard to understand it and impossible sometimes to run select and replace commands in such files. I do not mind having long parameter names at all. But also not sure what other think here @LysandreJik 3) I would always favor composition over inheritance (there was only one case which I would have changed) 4) IMO, we should not introduce new class methods and in general new design choices that the user doesn't know from other models. The class method `init_encoder` is not needed here IMO. Also, all these `make it super easy for the user ` methods are not always good if it makes the model less easy to understand. Would always try to keep models as "thin" as possible and not add any "magic" methods if not needed<|||||>What do you think of having a special tokenizer for the `DPRReader` ? I find the current way to tokenize the inputs a bit weird. I can have a custom `DPRReaderTokenizer` with a new `__call__` method like ```python def __call__(self, question: str, titles: List[str], texts: List[str], ...): ```<|||||>Ok I think this one is ready to merge. Could you do a final pass to make sure everything is ok @LysandreJik @thomwolf ? I'll do another PR about the tokenization stuff.<|||||>I did some changes @LysandreJik : - have 1:1 correspondances betwen model + tokenizers - change `base_model_prefix` to be the attribute of the model the classes are wrapping - remove the wrong `model_input_names ` Not sure why the CI doesn't pass. It seems related to `test_tokenization_bert_japanese ` though :/<|||||>I changed the tokenizers config names to match the pretrained model names. About the `.generate` method that could be in the tokenizer I totally agree. But as it is linked to the way I'm going to change the reader's tokenizer, I will do the change at the same time in the next PR if it's good for you. Is there anything else that needs to be improved ?<|||||>Feel free to do another pass on the PR @thomwolf @LysandreJik to make sure that all the model+tokenizers aspect are all good now :) There is still room for some improvements but I keep them for the next PR: - have a custom __call__ for the tokenizer of the reader - move the deocing stuff of `.generate` to the tokenizer or the reader<|||||>Thanks for your comments @LysandreJik :) If there is a 3.0.1 that's going to be shipped I'd rather have everything in this PR then as there will be big breaking changes<|||||>Ok @LysandreJik I added the tests for the models and the tokenizers :) I couldn't use all the tests of `ModelTesterMixin` as some of them don't apply here, but I used those that are relevant.<|||||>I merged your PR and updated the docs @LysandreJik Thanks for your help ;)<|||||>Very cool! Let's merge, thanks for iteration @lhoestq :)<|||||>Is there a short working example on the full end-to-end retrieval to reader pipeline?<|||||>If I may add to @weilin37 comment, fine tuning included
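For anyone following the tokenization discussion above, here is a rough sketch (not the final API) of how the `[CLS] <question> [SEP] <title> [SEP] <text>` layout described in the PR body could be assembled with a plain BERT tokenizer; the helper name and signature below are illustrative only.
```python
from typing import List

from transformers import BertTokenizer


def build_reader_inputs(tokenizer: BertTokenizer, question: str, titles: List[str], texts: List[str]) -> List[List[int]]:
    """Assemble [CLS] <question> [SEP] <title> [SEP] <text> for each retrieved passage."""
    all_input_ids = []
    for title, text in zip(titles, texts):
        # encode(question, text_pair=title) yields [CLS] question [SEP] title [SEP]
        ids = tokenizer.encode(question, text_pair=title)
        # append the passage text without adding special tokens again
        ids = ids + tokenizer.encode(text, add_special_tokens=False)
        all_input_ids.append(ids)
    return all_input_ids
```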
transformers
5,278
closed
[dbart] push picture
06-25-2020 14:51:41
06-25-2020 14:51:41
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=h1) Report > Merging [#5278](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.38%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5278/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5278 +/- ## ========================================== + Coverage 79.08% 79.46% +0.38% ========================================== Files 138 138 Lines 24078 24080 +2 ========================================== + Hits 19041 19135 +94 + Misses 5037 4945 -92 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.82% <0.00%> (-0.35%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.76% <0.00%> (+0.41%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (+0.91%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `92.78% <0.00%> (+1.30%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=footer). Last update [24f46ea...6f28be5](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This looks obnoxiously large in the diff viewer. 
Should we do some fancy markdown instead of ```markdown ![DBART](https://github.com/sshleifer/transformers_fork/raw/add-distilbart-pic/examples/seq2seq/distilbart_w_logos.png) ``` <|||||>The README view is fixed-width on GitHub so you should be fine. However, I would try to find a more permanent host/URL for your image, I suspect your fork's branch will get deleted at some point.<|||||>#5394
transformers
5,277
closed
can't open file 'transformers-cli'
When running ```!python transformers-cli convert --model_type xlnet``` I get the following error: ```python: can't open file 'transformers-cli': [Errno 2] No such file or directory``` Is there a possible fix for this problem? Thank you.
06-25-2020 14:51:12
06-25-2020 14:51:12
transformers
5,276
closed
Bert base model card
06-25-2020 14:46:42
06-25-2020 14:46:42
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=h1) Report > Merging [#5276](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `1.46%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5276/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5276 +/- ## ========================================== - Coverage 79.11% 77.64% -1.47% ========================================== Files 138 138 Lines 24080 24080 ========================================== - Hits 19050 18698 -352 - Misses 5030 5382 +352 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.17% <0.00%> (-81.14%)` | :arrow_down: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `66.72% <0.00%> (-9.69%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `63.95% <0.00%> (-6.98%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.58% <0.00%> (-2.57%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `95.97% <0.00%> (-1.73%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `94.92% <0.00%> (-1.45%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.20% <0.00%> (-1.42%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=footer). Last update [0e1fce3...de13069](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,275
closed
Description of how to preprocess text corpus for roBERTa LM training
# 🚀 Feature request There are several locations where you describe how to train RoBERTa models. For example here: - https://huggingface.co/blog/how-to-train - https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb - https://gist.github.com/aditya-malte/2d4f896f471be9c38eb4d723a710768b But none of them says how to preprocess the text corpus. I think it must be one sentence per line. But does it need empty lines between documents? Is it OK to shuffle the text line by line? Could you please clarify this?
06-25-2020 14:34:16
06-25-2020 14:34:16
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Well - this issue is still open IMO...<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Maybe better to ask the original RoBERTa authors or post on the forum at https://discuss.huggingface.co? We are trying to keep the issues for bug/feature reports now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
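Not an authoritative answer, but for reference: with `--line_by_line`, the example language-modeling script at the time built its dataset with `LineByLineTextDataset`, which behaves roughly as sketched below (the file path and block size here are just placeholders).
```python
from transformers import LineByLineTextDataset, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# Each non-empty, non-whitespace line of corpus.txt becomes one training example;
# blank lines between documents are simply dropped, and training examples are later
# shuffled by the sampler, so shuffling the file line by line beforehand is harmless.
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="./corpus.txt",  # hypothetical path: one sentence (or short paragraph) per line
    block_size=128,
)
```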
transformers
5,274
closed
[examples/seq2seq] more README improvements
06-25-2020 14:11:50
06-25-2020 14:11:50
transformers
5,273
closed
I have some problems with the "bert-large-uncased" model
This is the wrong code segment: from transformers import BertTokenizer, BertModel, BertConfig config = BertConfig.from_pretrained(args.bert_model, output_hidden_states = True, output_attentions = True) tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=True, output_hidden_states=True) bert = BertModel.from_pretrained(args.bert_model, config = config) # args.bert_model is 'bert-large-uncased' And below is the bug report: Traceback (most recent call last): File "main.py", line 119, in <module> main(args) File "main.py", line 64, in main net = BertProber(rel_vec_representation, args) File "/home/jzhao/program/interpret_bert/re/model.py", line 62, in __init__ self.bert = BertModel.from_pretrained(args.bert_model, config = config) File "/home/jzhao/anaconda3/envs/python36/lib/python3.6/site-packages/transformers/modeling_utils.py", line 466, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/home/jzhao/anaconda3/envs/python36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 615, in __init__ self.embeddings = BertEmbeddings(config) File "/home/jzhao/anaconda3/envs/python36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 149, in __init__ self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=0) File "/home/jzhao/anaconda3/envs/python36/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 97, in __init__ self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim)) TypeError: new() received an invalid combination of arguments - got (str, int), but expected one of: * (torch.device device) * (torch.Storage storage) * (Tensor other) * (tuple of ints size, torch.device device) didn't match because some of the arguments have invalid types: (str, int) * (object data, torch.device device) didn't match because some of the arguments have invalid types: (str, int)
06-25-2020 14:10:34
06-25-2020 14:10:34
Hi, I tried reproducing with the following code, but couldn't get it to crash: ```py from transformers import BertTokenizer, BertModel, BertConfig class Args: bert_model = "bert-large-uncased" args = Args() config = BertConfig.from_pretrained(args.bert_model, output_hidden_states = True, output_attentions = True) tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=True, output_hidden_states=True) bert = BertModel.from_pretrained(args.bert_model, config = config) # args.bert_model is 'bert-large-uncased' ``` Can you specify your environment? Are you sure your `args.bert_model` is `bert-large-uncased`?<|||||>I made a stupid mistake. The bert model is used in two places in my code, and the second place is ` config = BertConfig(args.bert_model, output_hidden_states = True, output_attentions = True) bert = BertModel.from_pretrained(args.bert_model, config = config) ` where I forgot to use the from_pretrained function for config, the right code should be ` config = BertConfig.from_pretrained(args.bert_model, output_hidden_states = True, output_attentions = True) bert = BertModel.from_pretrained(args.bert_model, config = config) `
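For anyone skimming this thread later, the corrected snippet in one piece (a minimal sketch, assuming `args.bert_model` is `'bert-large-uncased'`): the key point is that the config must also go through `from_pretrained`, not the plain constructor.
```python
from transformers import BertConfig, BertModel, BertTokenizer

bert_model = "bert-large-uncased"

# build the config with from_pretrained, not BertConfig(bert_model, ...)
config = BertConfig.from_pretrained(bert_model, output_hidden_states=True, output_attentions=True)
tokenizer = BertTokenizer.from_pretrained(bert_model, do_lower_case=True)
bert = BertModel.from_pretrained(bert_model, config=config)
```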
transformers
5,272
closed
A question about the test accuracy of the BERT-base-uncased model on the MNLI dataset
# 🐛 Bug I am curious about what accuracy you can reach by using the BERT-base-uncased model on the MNLI task. I got 61% test accuracy on the MNLI-matched dataset and 62% accuracy on the MNLI-mismatched dataset. ## Information Model I am using: BERT-base-uncased Language I am using the model on: English The problem arises when using: * I use exactly the same official example scripts: (give details below) https://github.com/huggingface/transformers#quick-tour https://github.com/huggingface/transformers#run_gluepy-fine-tuning-on-glue-tasks-for-sequence-classification The task I am working on is: * An official GLUE task: (MNLI) ## Expected behavior ## Environment info - `transformers` version: v2.11.0 - Platform: Linux - Python version: 3.7 - PyTorch version (GPU?): 1.5.0 - Using GPU in script?: 2 Tesla V100 GPUs - Using distributed or parallel set-up in script?: No
06-25-2020 12:49:35
06-25-2020 12:49:35
Same, I can't replicate the results in the README file.<|||||>> Same, I can't replicate the results in the README file. I also try to use BERT-large-uncased model and get 0.6209 eval_mnli/acc and 0.6281 eval_mnli-mm/acc.<|||||>FYI, I can get 84.25 eval_mnli/acc by using a non-zero number of warmup steps. . Specifically, adding these lines in `trainer.py`: ``` warmup_steps = float(num_training_steps)*0.1 scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=num_training_steps) ```<|||||>> FYI, I can get 84.25 eval_mnli/acc by using a non-zero number of warmup steps. . > Specifically, adding these lines in `trainer.py`: > > ``` > warmup_steps = float(num_training_steps)*0.1 > scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=num_training_steps) > ``` Many thanks for your sharing. Cheers!<|||||>> FYI, I can get 84.25 eval_mnli/acc by using a non-zero number of warmup steps. . > Specifically, adding these lines in `trainer.py`: > > ``` > warmup_steps = float(num_training_steps)*0.1 > scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=num_training_steps) > ``` Hi, Chaitanya. I try to fix the code, but I got an error when running the new program. Here is a double check that does the path of the ``trainer.py`` is ``src/transformers/trainer.py``? And the lines you added are beginning from the line 312 in the ``trainer.py``? Does the original code is ``scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=self.args.warmup_steps, num_training_steps=num_training_steps )``? Many Thanks. ![微信截图_20200704002130](https://user-images.githubusercontent.com/23516191/86468851-5fe2ed80-bd8c-11ea-9ae7-5fdf26b77919.png) <|||||>Adding a non-zero number of warmup steps in `src/transformers/trainer.py` is the only change I made. This error seems to occur because you have an output label that is outside the range [0, n_classes) while computing the loss. Did you make any other changes to the trainer/dataset? And are you training with the original MNLI dataset?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
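For anyone landing here later, a minimal, untested sketch of requesting a non-zero warmup without editing `trainer.py`: `TrainingArguments` already exposes `warmup_steps`. The concrete numbers and paths below are assumptions, and I have not verified that they reproduce the 84.25 figure quoted above.
```python
from transformers import TrainingArguments

# hypothetical value; in practice roughly len(train_dataset) // effective_batch_size * num_train_epochs
num_training_steps = 36_000

training_args = TrainingArguments(
    output_dir="./mnli-bert-base",                # hypothetical output path
    num_train_epochs=3,
    learning_rate=2e-5,
    warmup_steps=int(0.1 * num_training_steps),   # ~10% warmup, mirroring the snippet above
)
```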
transformers
5,271
closed
BART finetune.py: model not learning anything
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) ./examples/summarization/finetune.sh * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Finetuning on PubMed dataset. ## To reproduce Steps to reproduce the behavior: 1. set DATA_DIR & OUT_DIR 2. Run command: python finetune.py \ --model_name_or_path=facebook/bart-large-cnn \ --learning_rate=3e-5 \ --gpus 1 \ --do_predict \ --do_train \ --n_val 1000 \ --val_check_interval 0.1 \ --sortish_sampler \ --max_target_length=80 \ --val_max_target_length=200 \ --test_max_target_length=200 \ --max_source_length=850 \ --train_batch_size=1 \ --eval_batch_size=1 \ --data_dir=$DATA_DIR \ --output_dir=$OUT_DIR \ --logger=wandb \ $@ <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The model is not learning anything. The generated eval summaries are identical throughout training and identical across training instances with different hyperparams. Appears as though backprop is not happening but there is not error message. (Probably I am missing something simple) ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux-5.0.0-37-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
06-25-2020 11:59:11
06-25-2020 11:59:11
Seems to be coming from the --warmup_steps arg. If I set to 0, the model seems to learn as expected. If it is > 0 the model doesn't learn anything.<|||||>Interesting. Could you link me to that dataset/instructions? I just reran cnn_dm with warmup_steps=0 and my loss curve looks very similar in wandb. pink is no warmup: ![image](https://user-images.githubusercontent.com/6045025/85876097-38e05680-b7a3-11ea-9c39-7d9c572cd4be.png) Anyways, I will set default warmup_steps=0. <|||||>Hmmm interesting, I agree yours both look normal. I am using Pubmed dataset and truncating input docs at 850 input tokens. I'm pretty sure my model wasn't learning anything when warmuo_steps>0 for the following reasons: - The generated summaries were identical across training - The target summaries begin with '\<S\>' token but I believe the CNN data the model was finetuned on didn't. As a result, the model learns to begin output summaries with '\<S\>' very quickly. However, for each case with warmup_steps > 0 the generated val summaries didn't begin with this token - Loss curves below (orange: warmup_steps>0, grey: warmup_steps=0): - When warmup_steps>0 model followed identical loss curves even after a changed hparams like learning rate <img width="561" alt="Screenshot 2020-06-26 at 16 56 37" src="https://user-images.githubusercontent.com/51463426/85877481-22e88b00-b7cf-11ea-8b21-11320c0737b4.png"> I played around with it quite a bit and this pattern repeated for any warmp_steps>0 (including =1).<|||||>That is pretty compelling evidence. I will change the default to 0. Nothing else changed between runs? <|||||>Yes- I have tried across runs changing only this hparam.<|||||>Interesting. I fixed master. Would love to hear your experimental results/developer experience as you continue!<|||||>For sure, I'll pop in this library lots over the coming weeks I'm sure. Great library btw
transformers
5,270
closed
Create README.md
Create README.md for finance-koelectra-base-discriminator model
06-25-2020 09:27:47
06-25-2020 09:27:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=h1) Report > Merging [#5270](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5270/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5270 +/- ## ========================================== - Coverage 79.11% 79.08% -0.03% ========================================== Files 138 138 Lines 24080 24080 ========================================== - Hits 19050 19043 -7 - Misses 5030 5037 +7 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.44% <0.00%> (-1.18%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.77% <0.00%> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `77.18% <0.00%> (+0.76%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=footer). Last update [0e1fce3...67157e9](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,269
closed
Fix LR decay in TF Trainer
Revival of #5051. I accidentally deleted the previous branch, so I recreated the same one here.
06-25-2020 09:26:53
06-25-2020 09:26:53
@LysandreJik Sorry for the inconvenience :(<|||||>Will merge once the code quality passes<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=h1) Report > Merging [#5269](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `0.07%`. > The diff coverage is `6.25%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5269/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5269 +/- ## ========================================== - Coverage 79.11% 79.03% -0.08% ========================================== Files 138 138 Lines 24080 24102 +22 ========================================== - Hits 19050 19049 -1 - Misses 5030 5053 +23 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `17.85% <6.25%> (-0.84%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.92% <0.00%> (+0.14%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=footer). Last update [0e1fce3...78562e2](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>All tests are ok!
transformers
5,268
closed
Fix LR decay in TF Trainer
Revival of #5051. I accidentally deleted the previous branch, so I recreated the same one here.
06-25-2020 09:11:13
06-25-2020 09:11:13
transformers
5,267
closed
ValueError in T5 community colab notebook.
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): T5 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the community notebook https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb: * It raises subclassing error while defining `class T2TDataCollator(DataCollator)`. DataCollator is not a class anymore on the latest master branch, just `class T2TDataCollator()` fixes the error. * Even that bug is fixed, the notebook raises another ValueError while importing dataset.pt: ``` Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "<ipython-input-18-4f8aea5d9d8b>", line 191, in _mp_fn main() File "<ipython-input-18-4f8aea5d9d8b>", line 145, in main train_dataset = torch.load(data_args.train_file_path) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 589, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 847, in _load result = unpickler.load() File "/usr/local/lib/python3.6/dist-packages/nlp/splits.py", line 493, in __setitem__ raise ValueError("Cannot add elem. Use .add() instead.") ValueError: Cannot add elem. Use .add() instead. ``` ## To reproduce Steps to reproduce the behavior: 1. Run the community colab notebook: https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb 2. You will get the error. ## Expected behavior * Not applicable ## Environment info * It's running on colab.
06-25-2020 04:54:45
06-25-2020 04:54:45
Hi @y-rokutan, you are right, `DataCollator` is not a class anymore, it's a callable now, so remove the subclassing and change the `collate_batch` method to `__call__`. I'll update the notebook once the new version of transformers is released.<|||||>Hi @patil-suraj, thanks for your quick reply. I appreciate your contributions. Do you have any idea about the ValueError issue? I'm googling for a fix but still have the same error.<|||||>What is your nlp version? Try using nlp==0.2.0<|||||>`!pip install -U nlp==0.2.0` worked! nlp 0.3.0 does not seem to work with this notebook. Thanks!<|||||>Hi, I am running the notebook without any changes, but it's very slow; I think it's not using the TPU at all.
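For readers hitting the same error, a minimal sketch of the change described above — drop the subclassing and rename `collate_batch` to `__call__`. The feature names here are assumptions about how the notebook builds its dataset.
```python
import torch


class T2TDataCollator:
    """Plain callable replacing the old DataCollator subclass."""

    def __call__(self, batch):
        # stack the per-example tensors produced by the dataset into batched tensors
        input_ids = torch.stack([example["input_ids"] for example in batch])
        lm_labels = torch.stack([example["target_ids"] for example in batch])
        attention_mask = torch.stack([example["attention_mask"] for example in batch])
        decoder_attention_mask = torch.stack([example["target_attention_mask"] for example in batch])
        return {
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "lm_labels": lm_labels,
            "decoder_attention_mask": decoder_attention_mask,
        }
```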
transformers
5,266
closed
Finetune T5 on another dataset, AssertionError: assert tokenized.input_ids.shape[1] == max_length
# ❓ Questions & Help ## Details I was trying to finetune Amazon Food Review Dataset on T5 using **Latest code from master**. I formatted the data, as .source and .target. When running [finetune_t5.sh](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py), First it gives error : **Keyword arguments {'add_prefix_space': True} not recognized.** And then : `File "/content/transformers/examples/summarization/utils.py", line 51, in encode_file assert tokenized.input_ids.shape[1] == max_length AssertionError` Could you please say, whether it is a bug or I am doing something wrong?
06-25-2020 04:42:18
06-25-2020 04:42:18
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,265
closed
test_torch_fillmask failing on GPU
```bash FAILED tests/test_modeling_bart.py::MBartIntegrationTests::test_enro_forward expected_slice = torch.tensor([9.0078, 10.1113, 14.4787], device=torch_device, dtype=model.dtype) result_slice = logits[0][0][:3] > self.assertTrue(torch.allclose(expected_slice, result_slice, atol=TOLERANCE)) E AssertionError: False is not true tests/test_modeling_bart.py:258: AssertionError FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask_results tests/test_pipelines.py:147: in _test_mono_column_pipeline set([o[key] for o in result]), set([o[key] for o in expect]), E AssertionError: Items in the first set but not the second: E '<s>My name is John</s>' E '<s>My name is Chris</s>' E Items in the second set but not the first: E '<s> My name is John</s>' E '<s> My name is:</s>' ``` cc @julien-c I'll figure it out.
06-25-2020 03:35:54
06-25-2020 03:35:54
note to self: enro also fails on cpu, and rerunning on commit where it was green now fails. So change is something S3 related. <|||||>```python tests/test_pipelines.py:148: in _test_mono_column_pipeline set([o[key] for o in result]), set([o[key] for o in expect]), E AssertionError: Items in the first set but not the second: E '<s>My name is Chris</s>' E '<s>My name is John</s>' E Items in the second set but not the first: E '<s> My name is John</s>' E '<s> My name is:</s>' ```
transformers
5,264
closed
Set the number of times to evaluate per epoch when using Trainer
Basically, I want to see how the training is going against the evaluation dataset twice per epoch when using `Trainer`. If I calculate `logging_steps` as below so that `evaluate()` is called a certain number of times per epoch during training, will this hold up for both single- and multi-device training? ``` evals_per_epoch = 2 logging_steps = len(train_dataset) / train_batch_size / gradient_accumulation_steps // evals_per_epoch ``` I'm getting myself confused over the implementation of `global_steps` and `logging_steps` inside `Trainer.train`.
06-25-2020 02:27:12
06-25-2020 02:27:12
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
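A small, untested sketch of the calculation asked about above; the multi-device factor is an assumption about how the effective batch size scales, so double-check it against your setup.
```python
def logging_steps_for(evals_per_epoch, dataset_len, per_device_batch_size,
                      gradient_accumulation_steps=1, n_devices=1):
    # one optimizer step consumes per_device_batch_size * gradient_accumulation_steps * n_devices examples
    steps_per_epoch = dataset_len // (per_device_batch_size * gradient_accumulation_steps * n_devices)
    return max(1, steps_per_epoch // evals_per_epoch)


# e.g. with evaluate_during_training=True, setting
# logging_steps = logging_steps_for(2, len(train_dataset), 32)
# should trigger evaluate() roughly twice per epoch
```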
transformers
5,263
closed
[examples] Verify marian and mbart BLEU scores with examples/seq2seq/run_eval.py
Got MBART 26.8 before postprocessing on wmt_en_ro, 37.1 after, in 6 minutes (the first number is the same as fairseq's; the second should be as well). Marian: 27.7/37.4 in 90 seconds.
06-25-2020 01:33:46
06-25-2020 01:33:46
Surprised. Maybe the opus data overlaps the wmt test set?<|||||>![image](https://user-images.githubusercontent.com/6045025/90946657-faf75b80-e3fc-11ea-856a-dc598d992112.png) <|||||>Opus #s are legit!<|||||>@sshleifer do you using the default configuration in Readme ? I didn't get 37.1, and I have tried the configuration mentioned as original paper, I have got a 36.6 after postprocessing wmt_en_ro enro. In addition, I have a confusion about the code. If I don't add extra parameters, there is a warning in the output log. I try to solve it. I can't set a breakpoint at the place where the warning is generated. It's difficult for me to locate the reason. Can you help me look at it? `Keyword arguments {'add_prefix_space': False} not recognized. `<|||||>(1) Which split, val or test? (2) What was the score before post-processing? The warning is from [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L261), you can safely ignore it. <|||||>### Results on wmt_en_ro/test ```bash python run_eval.py Helsinki-NLP/opus-mt-en-ro wmt_en_ro/test.source enro_test_translations.txt --reference_path wmt_en_ro/test.target - -task translation --score_path mar_test_bleu.json --fp16 --bs 64 {'bleu': 27.6865, 'n_obs': 1999, 'runtime': 85, 'seconds_per_sample': 0.0425} ``` ```bash ro_post_process enro_finetune/test_generations.txt wmt_en_ro/test.target # 37.4 ``` ### Postprocessing Setup ```bash cd $HOME git clone [email protected]:moses-smt/mosesdecoder.git cd mosesdecoder git clone [email protected]:rsennrich/wmt16-scripts.git ro_post_process () { sys=$1 ref=$2 export MOSES_PATH=$HOME/mosesdecoder REPLACE_UNICODE_PUNCT=$MOSES_PATH/scripts/tokenizer/replace-unicode-punctuation.perl NORM_PUNC=$MOSES_PATH/scripts/tokenizer/normalize-punctuation.perl REM_NON_PRINT_CHAR=$MOSES_PATH/scripts/tokenizer/remove-non-printing-char.perl REMOVE_DIACRITICS=$MOSES_PATH/wmt16-scripts/preprocess/remove-diacritics.py NORMALIZE_ROMANIAN=$MOSES_PATH/wmt16-scripts/preprocess/normalise-romanian.py TOKENIZER=$MOSES_PATH/scripts/tokenizer/tokenizer.perl lang=ro for file in $sys $ref; do cat $file \ | $REPLACE_UNICODE_PUNCT \ | $NORM_PUNC -l $lang \ | $REM_NON_PRINT_CHAR \ | $NORMALIZE_ROMANIAN \ | $REMOVE_DIACRITICS \ | $TOKENIZER -no-escape -l $lang \ > $(basename $file).tok done # compute BLEU cat $(basename $sys).tok | sacrebleu -tok none -s none -b $(basename $ref).tok } ``` ```bash ro_post_process enro_test_translations.txt wmt_en_ro/test.target ``` <|||||>@sshleifer That's warning can be sefaly ignore it, that is great, I have try to figure out the reason for occuring. I using the test set, I have got 21.9 before post-processing. I'm using this postprocessing setup and run_eval.py, My analysis is that my model performance is not enough, maybe the fine-tuning parameters are inaccurate, or there are too many training steps during fine-tuning. I am using this version with following code link. https://github.com/huggingface/transformers/blob/b9772897ec9f54c1a83263b059bfd37acda936d5/examples/seq2seq/finetune.py#L371 this has been changed in lastest version. https://github.com/huggingface/transformers/blob/9e89390ce1e785e72452207139a334cd3bf745ff/examples/seq2seq/finetune.py#L396 I am not a native English speaker, If my words is unclear, you can let me repeat it in time. <|||||>I don't understand what code you ran, what you expected to happen, and what happened. The change you highlight should not affect your results.<|||||>my last comment has two question. 
the first question is my results can not be 37.7. the second question is I set save_top_k ==-1, I have solved it the follows is my runing parameters for the first question. `bash train_mbart_cc25_enro_pap.sh --output_dir $OUTPUT_DIR --gpus 1 --sortish_sampler` where train_mbart_cc25_enro_pap.sh as follows : ``` BS=16 MAX_LEN=128 python finetune.py \ --learning_rate=3e-5 \ --do_train \ --val_check_interval=0.25 \ --adam_eps 1e-06 \ --num_train_epochs 4 --src_lang en_XX --tgt_lang ro_RO \ --data_dir $ENRO_DIR \ --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \ --train_batch_size=$BS --eval_batch_size=$BS \ --task translation \ --warmup_steps 2500 \ --freeze_embeds \ --model_name_or_path=$MODEL_PATH \ --label_smoothing 0.2 \ --dropout 0.3 \ "$@" ```<|||||>that's my transformers report bleu as follows, 21.9 is using sacrebleu before postprocessing. @sshleifer ```{'bleu':` 26.0506, 'n_obs': 1999, 'runtime': 681, 'seconds_per_sample': 0.3407}``` and I have try to using the configuration as README with using `--fp16`, and I get a error log as follows : ``` 2020-09-15 15:58:46.000 [INFO] [Driver] RuntimeError: CUDA out of memory. Tried to allocate 490.00 MiB (GPU 0; 31.75 GiB total capacity; 29.17 GiB already allocated; 241.44 MiB free; 30.35 GiB reserved in total by PyTorch) ``` and I have see the README mentioned as follows > This should take < 6h/epoch on a 16GB v100 and achieve test BLEU above 26 To get results in line with fairseq, you need to do some postprocessing. <|||||>+ 26.06 is definitely expected behavior, as the README indicates. + The best I've ever scored from finetuning is 26.8 after training for 6 epochs. (Took 24h). + `--save_top-k=-1`, I can look into. + What command did you run with `sacrebleu` to get 21.9? The HF run_eval.py uses the `sacrebleu.corpus_bleu` python function. + I would recommend playing with the Helsinki-NLP/ models for faster MT finetuning.<|||||>> * 26.06 is definitely expected behavior, as the README indicates. > * The best I've ever scored from finetuning is 26.8 after training for 6 epochs. (Took 24h). > * `--save_top-k=-1`, I can look into. > * What command did you run with `sacrebleu` to get 21.9? The HF run_eval.py uses the `sacrebleu.corpus_bleu` python function. > * I would recommend playing with the Helsinki-NLP/ models for faster MT finetuning. @sshleifer Hi , Thank you for your kindness help. I have got a closer score 37.2 to 37.7, I have a curious question, why is `Helsinki-NLP/ models` faster ? <|||||>They are way smaller: 74 million parameters (Marian) vs 610M (mBART)
transformers
5,262
closed
Cannot control wandb metadata when running examples/seq2seq/finetune.py
There is no easy way to control the wandb project_name kwarg. How can we facilitate this without a massive number of command-line args?
06-25-2020 01:31:31
06-25-2020 01:31:31
You should be able to do it with environment variables. See [these docs](https://docs.wandb.com/library/environment-variables) Let me know if that solves the issue. The reason of the current implementation (and auto init of wandb) was to be able to instrument all example scripts without having to add custom wandb code. There is [documentation on W&B related to transformers](https://docs.wandb.com/library/integrations/huggingface) that probably need to be updated to add this detail. We should probably also find a place to add documentation related to W&B integration in transformers repo. Let me know if I can be of any help.<|||||>You could make `examples/wandb.md` with information and link to it from `examples/README.md`? Or just link to [this](https://docs.wandb.com/library/integrations/huggingface) in `examples/README.md` For the lightning `WandbLogger` integration, does `$WANDB_PROJECT` take precedence over passing `WandbLogger(project='project_name')`?<|||||>Thanks, that's a great idea! For the lightning integration, any parameter you pass explicitly in `WandbLogger` should take precedence. <|||||>@sshleifer it seems that the pytorch-lightning integration is commonly used so I also added a reference to it in my PR. Just curious, is it because you cannot do distributed computing with `Trainer` on multiple machines? Would it make sense to add an easier integration on `lightning_base` with a similar logic to `Trainer` and `TFTrainer`, ie by using `WandbLogger` on `generic_train` whenever wandb is installed and logged in (and ignore it otherwise)?<|||||>As long as we can avoid (1) generating a lot of logs when I run the unittests on my local machine (2) enabling gradient watching by default That sounds like a good change to me. @patil-suraj what do you think? do you use wandb?<|||||>Yes, adding wandb integration on `lightning_base` makes sense to me given wandb is enabled by default in `Trainer` and `TFTraner` , this will enable wandb logging for `run_pl_ner` and `run_pl_glue` With `Trainer` I can disable logging when I'm testing using `WANDB_DISABLED` env variable and gradient watching by setting `WANDB_WATCH` . env variables should avoid excessive command line args<|||||>Thanks for the feedback @sshleifer and @patil-suraj I can try to propose something similar for `lightning_base`. I'll wait for PR #5607 to be closed (feel free to comment if you think it's missing details) and I'll update the README accordingly when adding this functionality.<|||||>Awesome. I had no idea that PR existed. Tag me next time and sorry for being so slow.
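A minimal sketch of the environment-variable route discussed above. The project name is a placeholder, and the `WANDB_WATCH`/`WANDB_DISABLED` toggles are the ones mentioned for the `Trainer` integration, so they may not apply to every script.
```python
import os

# Set these before wandb is initialized (e.g. at the top of finetune.py, or export them in the shell).
os.environ["WANDB_PROJECT"] = "my-summarization-runs"   # hypothetical project name
# os.environ["WANDB_WATCH"] = "false"     # turn off gradient watching (Trainer integration)
# os.environ["WANDB_DISABLED"] = "true"   # silence wandb entirely, e.g. when running unittests locally
```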
transformers
5,261
closed
[proposal] Move tests/utils.py to src/transformers/testing_utils.py so that examples can import
for both groups of tests, the import would be ```python from transformers.testing_utils import slow ``` Motivation: I was about to rewrite the @slow decorator today and felt that this was cleaner. Any objections? @julien-c @LysandreJik @thomwolf @sgugger @patrickvonplaten @mfuntowicz @anyoneelseimforgetting
06-25-2020 01:13:04
06-25-2020 01:13:04
I think that would be cool. Since we're testing the examples, it makes sense to not duplicate the exact same code.<|||||>Sounds good to me
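For context, an illustrative sketch of the kind of decorator that would live in `src/transformers/testing_utils.py` after the move (not the actual implementation).
```python
import os
import unittest


def slow(test_case):
    """Skip a test unless RUN_SLOW is set to a truthy value in the environment."""
    run_slow = os.environ.get("RUN_SLOW", "0").lower() in ("1", "true", "yes")
    if not run_slow:
        return unittest.skip("test is slow; set RUN_SLOW=1 to run it")(test_case)
    return test_case
```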
transformers
5,260
closed
BertTokenizerFast does not support `pad_to_max_length` argument
# 🐛 Bug The fast tokenizer has different behavior from the normal tokenizer. ```python from transformers import BertTokenizer, BertTokenizerFast BertTokenizer.from_pretrained("bert-base-uncased").encode("hello world", max_length=128, pad_to_max_length="right") # succeeds BertTokenizerFast.from_pretrained("bert-base-uncased").encode("hello world", max_length=128, pad_to_max_length="right") *** TypeError: enable_padding() got an unexpected keyword argument 'max_length' ``` ## Environment info - `transformers` version: 2.11.0 - `tokenizers` version: 0.8.0rc3 - Platform: Ubuntu 18.04 - Python version: 3.7
06-25-2020 00:25:15
06-25-2020 00:25:15
Hi @jarednielsen, if you installed from source then padding is handled in a different way. You'll need to use the newly added `padding` argument. According to the docs `padding` (:obj:`Union[bool, str]`, `optional`, defaults to :obj:`False`): Activate and control padding. Accepts the following values: * `True` or `'longest'`: pad to the longest sequence in the batch (or no padding if only a single sequence if provided), * `'max_length'`: pad to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`) * `False` or `'do_not_pad'` (default): No padding (i.e. can output batch with sequences of uneven lengths) <|||||>Yes, this works on master (both the old and new tokenizer API) and should work in the new release that will be out very soon.<|||||>Thank you for the quick response! Reading https://github.com/huggingface/transformers/pull/4510 makes it much clearer.<|||||>Yes, we even have a nice tutorial on the new tokenizer API now thanks to the amazing @sgugger: https://huggingface.co/transformers/master/preprocessing.html
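A quick sketch of the new-style call described above (requires a source install or the upcoming release at the time of this thread).
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# padding="max_length" pads to max_length; truncation=True cuts longer inputs
encoded = tokenizer("hello world", max_length=128, padding="max_length", truncation=True)
print(len(encoded["input_ids"]))  # 128
```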
transformers
5,259
closed
Create README.md
06-24-2020 23:39:16
06-24-2020 23:39:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=h1) Report > Merging [#5259](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d12ceb48bad126768e44d2bd958fa7638abd0f16&el=desc) will **decrease** coverage by `0.03%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5259/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5259 +/- ## ========================================== - Coverage 79.10% 79.07% -0.04% ========================================== Files 138 138 Lines 24073 24073 ========================================== - Hits 19043 19035 -8 - Misses 5030 5038 +8 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5259/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5259/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.89% <0.00%> (-0.89%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=footer). Last update [d12ceb4...39fef05](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for sharing 🤗 [model page](https://huggingface.co/moumeneb1/flaubert-base-cased-ecology_crisis)
transformers
5,258
closed
save_pretrained: mkdir(exist_ok=True)
Old Logic: if you pass something that is not a pre-existing directory -> Error New Logic: if you pass a file that exists -> Error. if you pass a path that doesn't exist -> we call `mkdir path`, no error. if you pass an existing directory -> no error. This is not a breaking change, since no calls that previously succeeded produce different results. Costs: - you might occasionally make a directory called `pytorch_model.bin` for a confused user. - a little bit of error checking code Benefits: - fewer late failures during training because you forgot to mkdir. This happens to me a lot. - It feels like the spirit of the lib is to make NLP easier for the user, and I think this change is a small step in that direction. Feedback much appreciated, I want to see if people like this before I add tests.
06-24-2020 22:22:13
06-24-2020 22:22:13
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=h1) Report > Merging [#5258](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efae6645e223f29cf05eeafe95105a9f869b66dd&el=desc) will **decrease** coverage by `0.51%`. > The diff coverage is `50.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5258/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5258 +/- ## ========================================== - Coverage 77.69% 77.17% -0.52% ========================================== Files 138 138 Lines 24291 24300 +9 ========================================== - Hits 18872 18754 -118 - Misses 5419 5546 +127 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.13% <0.00%> (-0.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.39% <50.00%> (-0.10%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.93% <50.00%> (-0.21%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.39% <66.66%> (-1.32%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <100.00%> (+0.03%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=footer). Last update [1af58c0...ac579a6](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). 
<|||||>I'm favorable to this (I think I wanted to do it some time ago). If we end up merging this, we should also clean up a lot of `os.makedir` calls "upstream" of this (example scripts, etc.)<|||||>Awesome, I'll grep for makedirs and mkdir and see what I can delete tomorrow.<|||||>Great change! <|||||>@thomwolf why did you guys put `logger.error` instead of raising normal exceptions in the python tokenizer file? Should I raise a `NotADirectoryException` if a save path is mis-specified as a file, or keep the `logger.error`, `return None` logic? This will be for calls like `tokenizer.save_pretrained("tokenizer.json")`<|||||>You should also clean up some calls to `os.makedir` in the Trainer, I think?
transformers
5,257
closed
Tokenization tutorial
This takes most of the description in #4510 and organizes it as a tokenizer tutorial for section 2 of the documentation. Preview is [here](https://52645-155220641-gh.circle-artifacts.com/0/docs/_build/html/preprocessing.html).
06-24-2020 21:16:54
06-24-2020 21:16:54
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=h1) Report > Merging [#5257](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0148c262e79f5ca12140d7fc35a6d3e0d80d5d3b&el=desc) will **decrease** coverage by `0.90%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5257/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5257 +/- ## ========================================== - Coverage 79.01% 78.10% -0.91% ========================================== Files 138 138 Lines 24064 24064 ========================================== - Hits 19013 18795 -218 - Misses 5051 5269 +218 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `49.30% <0.00%> (-42.12%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-2.07%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (-0.13%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (+0.94%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=footer). Last update [0148c26...7bf3c46](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,256
closed
RobertaTokenizerFast produces a different output than RobertaTokenizer
# 🐛 Bug `RobertaTokenizerFast.tokenize()` produces a different output than `RobertaTokenizer.tokenize()`. I am not sure if this is an issue that will impact model performance. Is this intended? I assumed the fast tokenizers should be consistent with the normal ones in terms of outputs. ## Information Model I am using (Bert, XLNet ...): Roberta Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python from transformers import RobertaTokenizer, RobertaTokenizerFast tokenizer = RobertaTokenizer.from_pretrained("roberta-base") tokens = tokenizer.tokenize("This is a test. </s> <s> Another one. </s> <s> Yet another one.") print("Normal Tokens: " + str(tokens)) ids = tokenizer.convert_tokens_to_ids(tokens) print("Normal IDs: " + str(ids)) tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base") tokens = tokenizer.tokenize("This is a test. </s> <s> Another one. </s> <s> Yet another one.") print("Fast Tokens: " + str(tokens)) ids = tokenizer.convert_tokens_to_ids(tokens) print("Fast IDs: " + str(ids)) ``` Output: ``` Normal Tokens: ['This', 'Ġis', 'Ġa', 'Ġtest', '.', '</s>', '<s>', 'ĠAnother', 'Ġone', '.', '</s>', '<s>', 'ĠYet', 'Ġanother', 'Ġone', '.'] Normal IDs: [713, 16, 10, 1296, 4, 2, 0, 2044, 65, 4, 2, 0, 3507, 277, 65, 4] Fast Tokens: ['ĠThis', 'Ġis', 'Ġa', 'Ġtest', '.', 'Ġ', '</s>', 'Ġ', '<s>', 'ĠAnother', 'Ġone', '.', 'Ġ', '</s>', 'Ġ', '<s>', 'ĠYet', 'Ġanother', 'Ġone', '.'] Fast IDs: [152, 16, 10, 1296, 4, 1437, 2, 1437, 0, 2044, 65, 4, 1437, 2, 1437, 0, 3507, 277, 65, 4] ``` Using `tokenizer.enocde()` instead of `tokenizer.convert_tokens_to_ids(tokenizer.tokenize())` solves the discrepancy with the first token but still inserts token id `1437` between `</s>` and `<s>`. ## Expected behavior `RobertaTokenizerFast` produces the same output as `RobertaTokenizer`. ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
06-24-2020 20:52:59
06-24-2020 20:52:59
For the record, for new users: this now seems to have been resolved. I tried to reproduce it in my notebook but am getting the same results for both of them: <img width="1383" alt="image" src="https://github.com/huggingface/transformers/assets/104596164/bde951ac-8b9c-4996-966b-1d11f4c12d35">
transformers
5,255
closed
Fix first test
06-24-2020 19:15:59
06-24-2020 19:15:59
transformers
5,254
closed
Move GenerationMixin to separate file
This PR splits the `modeling_utils.py` and `modeling_tf_utils.py` by moving the code and methods related to generation to `modeling_generation.py` and `modeling_tf_generation.py` respectively. Both of these files were getting pretty long, with the code dedicated to generation taking about 1000 LOC in each while being completely disjoint from the rest. This re-organization should make the code easier to read and contribute to. There are no functional changes, I literally just created a new Mixin class for each and moved the functions as is.
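As a rough illustration (a simplified sketch of the pattern, not the actual diff), the mixin-based re-organization looks like this:

```python
import torch.nn as nn


class GenerationMixin:
    """Collects all generation-related methods (greedy search, beam search, sampling, ...)."""

    def generate(self, input_ids=None, **kwargs):
        # The ~1000 lines of generation logic live here, moved over unchanged.
        raise NotImplementedError


class PreTrainedModel(nn.Module, GenerationMixin):
    # The base model class simply inherits the mixin, so every model still
    # exposes `.generate()` exactly as before; only the file layout changes.
    pass
```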
06-24-2020 19:11:48
06-24-2020 19:11:48
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=h1) Report > Merging [#5254](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.52%`. > The diff coverage is `77.79%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5254/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5254 +/- ## ========================================== + Coverage 77.01% 77.53% +0.52% ========================================== Files 128 140 +12 Lines 21615 24334 +2719 ========================================== + Hits 16646 18868 +2222 - Misses 4969 5466 +497 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | | | [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | | | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | | | ... and [159 more](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=footer). Last update [482a599...356e825](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome, very much in favor of this change! <|||||>I changed the name and mentioned the intended child class in the comment as suggested by @sshleifer I tried playing around with `importlib` to import `shape_list` dynamically from `modeling_tf_utils.py` but I'm getting stumped making it work with relative imports. Any suggestions / pointers @patrickvonplaten? I also didn't see `importlib` used anywhere else for similar purposes so I feel a little uneasy about bringing in additional machinery to the lib :) <|||||>> This is great! That cleans up the `modeling_(tf_)utils` a lot! > > I'm thinking that `modeling_generation_utils.py` and `modeling_tf_generation_utils.py` would probably be better names. When I see this I'm thinking that `generation` is another model, rather than a utility file. > > Pinging @thomwolf so he can give his opinion on introducing this mixin. Changed the names :) <|||||>I moved `shape_list` back to the main `modeling_tf_utils.py` (duplicated in `generation_tf_utils.py`) and renamed the files to Thom's suggestion. Should be ready to merge @LysandreJik !
transformers
5,253
closed
Use master _static
Make all doc versions use the _static from master to make sure they all have the same version controller, link to hugging face logo etc.
06-24-2020 19:00:41
06-24-2020 19:00:41
transformers
5,252
closed
[Tokenization] Fix #5181 - make #5155 more explicit - move back the default logging level in tests to WARNING
Padding to max sequence length while truncating to another length did not behave as expected on slow tokenizers as raised in #5181 by @sshleifer (it was truncating and then padding back to original length...) This PR adds more tests to cover various combinations of padding + truncation strategies. Fix #5181 This PR also: - make #5155 clearer by changing the assertion in a cleaner error message (until the data processors are refactored) - move back the default level of logging in tests to `logging.WARNING` - switch some really slow tokenizations tests on CPU (when the inputs go in the full models) to `@slow` and speed-up testing by limiting the max sequence length used for testing
06-24-2020 16:48:42
06-24-2020 16:48:42
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=h1) Report > Merging [#5252](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7ac91107119f95a9034e5404bd5af34355d0ffa5&el=desc) will **increase** coverage by `1.60%`. > The diff coverage is `86.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5252/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5252 +/- ## ========================================== + Coverage 77.48% 79.09% +1.60% ========================================== Files 138 138 Lines 24073 24071 -2 ========================================== + Hits 18653 19038 +385 + Misses 5420 5033 -387 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.48% <85.71%> (-1.00%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.16% <100.00%> (+0.34%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.35% <0.00%> (-0.42%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.41%)` | :arrow_up: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <0.00%> (+2.56%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+3.53%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.41% <0.00%> (+42.11%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=footer). Last update [7ac9110...230551f](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ok, this one is ready for review/merge. 
It fixes an important bug in the interaction of padding and truncation for slow tokenizers in the new backend. It also adds a lot of tests and tweaks the tests a bit to make them faster and less verbose.
transformers
5,251
closed
Fix version controller links (for realsies)
This time tested on pretty much all situations, and adds proper links as a result. In particular:
- not sure whether `location.toString()` will end with a '/' or not when on the index; this works with both
- when nested, use the previous value and not an absolute one
- the base url needs to be sliced up to the version, including it for stable
06-24-2020 16:12:46
06-24-2020 16:12:46
transformers
5,250
closed
Fix tensor label type inference in default collator
Quick fix to `default_data_collator` allowing it to recognize the correct type of PT tensor label inputs rather than always casting them to float (#5060)
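A minimal sketch of the kind of dtype inference involved (illustrative only; the actual collator code may differ):

```python
import torch


def infer_label_dtype(first_label):
    # Classification labels (Python ints / int64 tensors) should stay long,
    # regression-style labels fall back to float.
    if isinstance(first_label, torch.Tensor):
        return torch.long if first_label.dtype == torch.int64 else torch.float
    return torch.long if isinstance(first_label, int) else torch.float
```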
06-24-2020 15:55:48
06-24-2020 15:55:48
What's the perf impact of a try/catch?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=h1) Report > Merging [#5250](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49f6e7a3c6729025e0d412ee19786c71811a6390&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5250/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5250 +/- ## ======================================= Coverage 77.96% 77.96% ======================================= Files 138 138 Lines 23886 23887 +1 ======================================= + Hits 18622 18623 +1 Misses 5264 5264 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5250/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `98.36% <100.00%> (+0.02%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5250/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5250/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=footer). Last update [49f6e7a...513dcb5](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Is it supposed to be slower? I ran a quick profile iterating over IMDb train set and saw no difference, but can manually check for `torch.tensor` instead if that's preferred.
transformers
5,249
closed
Distilroberta Tokenizer and Encoder not aligning
I working on a sequence tagging task where I to extract cause and effect sub spans for a given text. For example, extract <e1> and <e2> for the text below. `"<e2>The Sunshine State drew in a net influx of about $17.7 billion in adjusted gross income (AGI) - most of which (72 percent) came from those aged 55 and older.</e2> <e1>It is consistently one of the most popular destinations for retirees due to affordability and low taxes.</e1> Florida's $17.7 billion in net AGI dwarves the remaining 19 states that saw a positive net influx of income - which combined for a total of $19.4 billion."` For the model, I'm try to generate a BIO style tags that align with the tokenized input from the Distilroberta tokenizer. How I'm finding the after encoding the tokenized input, there is misalignment with the expected tags. ``` text = """The Sunshine State drew in a net influx of about $17.7 billion in adjusted gross income (AGI) - most of which (72 percent) came from those aged 55 and older. It is consistently one of the most popular destinations for retirees due to affordability and low taxes. Florida's $17.7 billion in net AGI dwarves the remaining 19 states that saw a positive net influx of income - which combined for a total of $19.4 billion.""" cause = 'It is consistently one of the most popular destinations for retirees due to affordability and low taxes.' effect = 'The Sunshine State drew in a net influx of about $17.7 billion in adjusted gross income (AGI) - most of which (72 percent) came from those aged 55 and older.' # Test that cause and effect are valid substring in text print(text.find(cause)) # 160 print(text.find(effect)) # 0 from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilroberta-base") # Convert cause into Tags cause_toks = tokenizer.tokenize(cause) cause_tags = ["B-cause"] + ["I-cause"] * (len(cause_toks) -1) # Convert effect into Tags effect_toks = tokenizer.tokenize(effect) effect_tags = ["B-cause"] + ["I-cause"] * (len(cause_toks) -1) # Convert text tokinzed string text_toks = tokenizer.tokenize(text) text_toks_string = " ".join(text_toks) text_toks_string = text_toks_string.replace(" ".join(cause_toks), " ".join(cause_tags)) text_toks_string = text_toks_string.replace(" ".join(effect_toks)," ".join(effect_tags)) text_toks = [tok if tok in ["B-cause", "I-cause", "B-effect", "I-effect"] else "O" for tok in text_toks_string.split()] #['B-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'O', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] print("text toks len: ", len(text_toks) + 2) # include start and end tokens print("encoded text len: ", len(tokenizer.encode(text))) # text toks len: 77 # encoded text len: 100 ``` I also found I don't seem to get consistent behavior encoding a text and subspan seperately either. ``` text = "Cats are super coolio" subtext = "super coolio" print(tokenizer.encode(text)) # [0, 20913, 32, 2422, 3035, 1020, 2] print(tokenizer.encode(subtext, add_special_tokens=False))` # [16101, 3035, 1020] ``` Is my understanding on encode wrong? 
I thought encode converts the BPE tokens to numerical values and adds the cls and sep tokens at the beginning and end. But it seems like something else is going on.
06-24-2020 15:55:30
06-24-2020 15:55:30
Dug a bit deeper. It seems encode doesn't deterministically tokenize on white-spaces. In the cat example: 16101 -> "super" and 2422 -> " super". Is there an option in encode to force white space splitting. I guess the hack is to tokenize each word separately but that rather inefficient <|||||>For the GPT2/Roberta tokenizers, the space before a word is part of the word which explain the discrepancy you see. You can set `add_prefix_space` at initialization, e.g. `tokenizer = AutoTokenizer.from_pretrained("distilroberta-base", add_prefix_space=True)` to always add a space before the text. Performances will be slightly lower as showed in https://github.com/huggingface/transformers/issues/3788 but you will get a consistent behavior when encoding a text and a subspan separately.<|||||>I'm not seeing the behavior you described @thomwolf . Perhaps I'm misunderstanding what add_prefix_space does. ``` tokenizer = AutoTokenizer.from_pretrained("distilroberta-base", add_prefix_space=True) text = "Cats are super coolio" subtext = "super coolio" print(tokenizer.encode(text, add_special_tokens=True)) print(tokenizer.encode(subtext, add_special_tokens=False)) #[0, 20913, 32, **2422, 3035, 1020**, 2] #[16101, 3035, 1020] ``` ``` tokenizer = AutoTokenizer.from_pretrained("distilroberta-base", add_prefix_space=False) text = "Cats are super coolio" subtext = "super coolio" print(tokenizer.encode(text, add_special_tokens=True)) print(tokenizer.encode(subtext, add_special_tokens=False)) # [0, 20913, 32, **2422, 3035, 1020**, 2] # [16101, 3035, 1020] ``` However, I add special tokens on the subtext, that does seem to work. It just requires extra line parse out the special characters during insertion but I can work with that. This behavior work regardless of whether add_prefix_space is enabled or not. ``` tokenizer = AutoTokenizer.from_pretrained("distilroberta-base") text = "Cats are super coolio" subtext = "super coolio" print(tokenizer.encode(text, add_special_tokens=True)) print(tokenizer.encode(subtext, add_special_tokens=True)) # [0, 20913, 32, 2422, 3035, 1020, 2] # [0, 2422, 3035, 1020, 2] ``` <|||||>I see. This is now fixed on master and will be in the next release. Here is the current behavior with your examples: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilroberta-base", add_prefix_space=True) text = "Cats are super coolio" subtext = "super coolio" print(tokenizer.encode(text, add_special_tokens=True)) [0, 20913, 32, 2422, 3035, 1020, 2] print(tokenizer.encode(subtext, add_special_tokens=False)) [2422, 3035, 1020] tokenizer = AutoTokenizer.from_pretrained("distilroberta-base", add_prefix_space=False) print(tokenizer.encode(text, add_special_tokens=True)) [0, 347, 2923, 32, 2422, 3035, 1020, 2] print(tokenizer.encode(subtext, add_special_tokens=False)) [16101, 3035, 1020] ``` You can also confirm the behavior with the tokens (the prefix space of the word is this `Ġ` in GPT2/Roberta tokenizers): ```python tokenizer = AutoTokenizer.from_pretrained("distilroberta-base", add_prefix_space=True) tokenizer.tokenize(text) ['ĠCats', 'Ġare', 'Ġsuper', 'Ġcool', 'io'] tokenizer.tokenize(subtext) ['Ġsuper', 'Ġcool', 'io'] tokenizer = AutoTokenizer.from_pretrained("distilroberta-base", add_prefix_space=False) tokenizer.tokenize(text) ['C', 'ats', 'Ġare', 'Ġsuper', 'Ġcool', 'io'] tokenizer.tokenize(subtext) ['super', 'Ġcool', 'io'] ```<|||||>@thomwolf Thanks! I'll go ahead close this issue. Appreciate the quick turnaround and fix.
transformers
5,248
closed
Fix links in version selector
This fixes the links provided by the version selector.
06-24-2020 15:30:32
06-24-2020 15:30:32
transformers
5,247
closed
[WIP] Support label_smoothed_cross_entropy
By default there is no implementation of cross entropy with soft labels in PyTorch, as discussed in #5168. I found a feature request in PyTorch for it, but it is still not done: https://github.com/pytorch/pytorch/issues/7455 There is also a discussion there where I found an implemented version of this loss. I checked that it performs the same as nn.CrossEntropyLoss given smoothing=0.0 and accurately smooths labels given nonzero smoothing. I ported it with small refactoring. Key improvements planned in this PR: 1) Add a label-smoothed cross entropy loss 2) Support and parametrise the loss choice in `finetune.py`
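For reference, a common formulation of this loss looks roughly like the sketch below (written from memory in the fairseq style; not necessarily the exact version ported in this PR):

```python
import torch


def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=-100):
    """lprobs: (num_tokens, vocab) log-probabilities; target: (num_tokens,) gold ids."""
    pad_mask = target.eq(ignore_index)
    # Clamp so padded positions index a valid row; they are masked out below anyway.
    safe_target = target.clamp(min=0).unsqueeze(-1)
    nll_loss = -lprobs.gather(dim=-1, index=safe_target).squeeze(-1)
    smooth_loss = -lprobs.sum(dim=-1)
    nll_loss = nll_loss.masked_fill(pad_mask, 0.0)
    smooth_loss = smooth_loss.masked_fill(pad_mask, 0.0)
    eps_i = epsilon / lprobs.size(-1)
    loss = (1.0 - epsilon) * nll_loss + eps_i * smooth_loss
    # Average over non-padding tokens; epsilon == 0.0 recovers the plain NLL loss.
    return loss.sum() / (~pad_mask).sum()
```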
06-24-2020 15:11:07
06-24-2020 15:11:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=h1) Report > Merging [#5247](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aa6a29bc25b663e1311c5c4fb96b004cf8a6d2b6&el=desc) will **increase** coverage by `0.38%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5247/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5247 +/- ## ========================================== + Coverage 77.92% 78.30% +0.38% ========================================== Files 137 137 Lines 23475 23475 ========================================== + Hits 18292 18383 +91 + Misses 5183 5092 -91 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `95.18% <0.00%> (+0.37%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.28% <0.00%> (+0.82%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=footer). Last update [aa6a29b...4d63b64](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>The one thing i dont understand is where to initialise this loss in `finetune.py`. As i see now there is now no loss instances in `finetune.py`, the loss function comes from given model and initialised during modelling_*.py<|||||>This PR seems outdated. The issue has been resolved with PR #5919<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,246
closed
Fix deploy doc
Try to update the master doc like the stable docs to see if this fixes the problem of things not being copied over.
06-24-2020 14:56:48
06-24-2020 14:56:48
transformers
5,245
closed
[Benchmarks] improve Example Plotter
This PR makes it possible to plot csv files that have "N/A" values.
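One way to tolerate such values when reading the csv is a small conversion helper like this (a sketch with an assumed file name and column name, not the actual plotting script):

```python
import csv


def parse_result(value):
    # Benchmark cells can legitimately be "N/A" (e.g. failed or skipped runs); map them to None.
    return None if value == "N/A" else float(value)


with open("benchmark_results.csv", newline="") as f:
    rows = [{**row, "result": parse_result(row["result"])} for row in csv.DictReader(f)]
```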
06-24-2020 14:42:25
06-24-2020 14:42:25
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=h1) Report > Merging [#5245](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9fe09cec76efa1e221c3fd6eb8520ba0a911f092&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5245/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5245 +/- ## ========================================== - Coverage 77.93% 77.92% -0.01% ========================================== Files 138 138 Lines 23860 23859 -1 ========================================== - Hits 18595 18593 -2 - Misses 5265 5266 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5245/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-0.24%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5245/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (-0.15%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=footer). Last update [9fe09ce...882b09d](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,244
closed
Add some prints to debug deploy script
06-24-2020 14:33:01
06-24-2020 14:33:01
transformers
5,243
closed
Don't recreate old docs
Change the check to look at a directory on the doc hosts instead of CircleCI to avoid creating old docs at each commit.
06-24-2020 13:37:40
06-24-2020 13:37:40
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=h1) Report > Merging [#5243](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/173528e3685bc4321630b7f979d01896c57a5c15&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5243/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5243 +/- ## ========================================== + Coverage 77.96% 77.99% +0.02% ========================================== Files 138 138 Lines 23839 23839 ========================================== + Hits 18586 18593 +7 + Misses 5253 5246 -7 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5243/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5243/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.15% <0.00%> (+0.29%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5243/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=footer). Last update [173528e...80b87e8](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,242
closed
[Benchmark] fix print in benchmark
Tiny change to pretty print the results.
06-24-2020 13:33:29
06-24-2020 13:33:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=h1) Report > Merging [#5242](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9fe09cec76efa1e221c3fd6eb8520ba0a911f092&el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5242/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5242 +/- ## ========================================== - Coverage 77.93% 77.91% -0.03% ========================================== Files 138 138 Lines 23860 23860 ========================================== - Hits 18595 18590 -5 - Misses 5265 5270 +5 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.84% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.44% <0.00%> (-1.18%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (-0.15%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=footer). Last update [9fe09ce...911baec](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,241
closed
[Benchmark] Extend Benchmark to all model type extensions
This PR makes the following changes: 1) The default model class to benchmark is the one that can be found under `config.architectures`. 2) Improve the plotting file.
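For the first point, the lookup presumably boils down to something like this (an illustrative sketch, not the actual benchmark code):

```python
import transformers
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-uncased")
architecture = config.architectures[0]             # e.g. "BertForMaskedLM"
model_class = getattr(transformers, architecture)  # resolve the concrete model class
model = model_class(config)
```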
06-24-2020 12:49:13
06-24-2020 12:49:13
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=h1) Report > Merging [#5241](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1ae132a07d7f294cf58cd50f7db8723d00e282de&el=desc) will **increase** coverage by `0.44%`. > The diff coverage is `40.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5241/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5241 +/- ## ========================================== + Coverage 77.49% 77.93% +0.44% ========================================== Files 138 138 Lines 23787 23806 +19 ========================================== + Hits 18433 18554 +121 + Misses 5354 5252 -102 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <33.33%> (-5.12%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `79.81% <37.50%> (-2.88%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.36% <100.00%> (+0.23%)` | :arrow_up: | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.84% <100.00%> (+0.07%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: | | ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=footer). Last update [1ae132a...a007369](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,240
closed
[WIP] Add 🤗nlp in examples using the updated tokenizer API
This PR supersedes #4864. This PR examines how to best make use of all the features of 🤗nlp in the examples. The first example studied is GLUE. The main goal is to have explicit data processing (target: no data processing happening inside transformers) as well as to add some efficiency features like dynamic/optimized batching. The second goal is to make this a lot more efficient, fast, and reproducible.
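The kind of explicit preprocessing this aims at looks roughly like the snippet below (my own sketch, assuming the 🤗nlp `load_dataset`/`map` API and the updated tokenizer `__call__` available at the time):

```python
import nlp  # the 🤗nlp library
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
dataset = nlp.load_dataset("glue", "mrpc", split="train")


def encode(batch):
    # All data processing is explicit and lives in the example script.
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True, padding="max_length")


dataset = dataset.map(encode, batched=True)
```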
06-24-2020 12:48:16
06-24-2020 12:48:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=h1) Report > Merging [#5240](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7c41057d5090f5e665f2404878369ecb13939def&el=desc) will **decrease** coverage by `1.27%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5240/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5240 +/- ## ========================================== - Coverage 78.34% 77.07% -1.28% ========================================== Files 138 138 Lines 23841 23841 ========================================== - Hits 18679 18376 -303 - Misses 5162 5465 +303 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (-0.92%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.50% <0.00%> (-0.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-0.30%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=footer). Last update [7c41057...9df4881](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,239
closed
Multilingual MNLI model
Hi, I'm trying zero-shot learning with 'facebook/bart-large-mnli' and it works pretty well, but I want to do it in Spanish. Is there any multilingual MNLI model? Thanks for your attention!
06-24-2020 12:13:35
06-24-2020 12:13:35
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,238
closed
Not Implemented Error
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): TFBert Language I am using the model on (English, Chinese ...): Any The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ## To reproduce I am trying to run the Bert model on a sequence of sentences. Steps to reproduce the behavior: ``` import tensorflow as tf from transformers import BertTokenizer, TFBertModel inputs = tf.keras.Input(shape=(50, 64), dtype='int32') model = TFBertModel.from_pretrained('bert-base-uncased') outputs = tf.keras.layers.TimeDistributed(model)(inputs) ``` I get a not implemented error, NotImplementedError Traceback (most recent call last) <ipython-input-5-631f3cd2e8b2> in <module> ----> 1 outputs = tf.keras.layers.TimeDistributed(model)(inputs) The same code works fine for ``` inputs = tf.keras.Input(shape=(10, 128, 128, 3)) conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3)) outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs) outputs.shape ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I should be able to run the bert model on the sequence of sentences. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.0.0 - Platform: Linux-5.0.0-1028-azure-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.8 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): 2.0.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
06-24-2020 12:11:15
06-24-2020 12:11:15
This is likely due because you are using a wrong version of Tensorflow. Can you run `transformers-cli env`, as the template suggests, and place the result here?<|||||>Can you post the full stack trace? It's hard to debug this way.<|||||>@BramVanroy for your reference ``` --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-2-db5a2030f0d4> in <module> 3 inputs = tf.keras.Input(shape=(50, 64), dtype='int32') 4 model = TFBertModel.from_pretrained('bert-base-uncased') ----> 5 outputs = tf.keras.layers.TimeDistributed(model)(inputs) /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs) 840 not base_layer_utils.is_in_eager_or_tf_function()): 841 with auto_control_deps.AutomaticControlDependencies() as acd: --> 842 outputs = call_fn(cast_inputs, *args, **kwargs) 843 # Wrap Tensors in `outputs` in `tf.identity` to avoid 844 # circular dependencies. /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/wrappers.py in call(self, inputs, training, mask) 254 y = self.layer(inputs, **kwargs) 255 # Shape: (num_samples, timesteps, ...) --> 256 output_shape = self.compute_output_shape(input_shape).as_list() 257 output_shape = self._get_shape_tuple( 258 (-1, input_length), y, 1, output_shape[2:]) /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/wrappers.py in compute_output_shape(self, input_shape) 208 child_input_shape = tensor_shape.TensorShape([input_shape[0]] + 209 input_shape[2:]) --> 210 child_output_shape = self.layer.compute_output_shape(child_input_shape) 211 if not isinstance(child_output_shape, tensor_shape.TensorShape): 212 child_output_shape = tensor_shape.TensorShape(child_output_shape) /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py in compute_output_shape(self, input_shape) 710 def compute_output_shape(self, input_shape): 711 if not self._is_graph_network: --> 712 return super(Network, self).compute_output_shape(input_shape) 713 714 # Convert any shapes in tuple format to TensorShapes. /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in compute_output_shape(self, input_shape) 637 'layer (%s).' % self.__class__.__name__) 638 return nest.map_structure(lambda t: t.shape, outputs) --> 639 raise NotImplementedError 640 641 @doc_controls.for_subclass_implementers NotImplementedError: ```<|||||>Could this be caused by the fact that TFBertModel returns a tuple (hidden states, pooled output)? In @4rshdeep's case, would we expect TimeDistributed to return a shape of (batch_size, 50, 768), or (batch_size, 50, 64, 768)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I'm having the same error with using the data Streaming app on HuggingFace.
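As a possible workaround (an untested sketch, assuming the goal is simply to encode each of the 50 sentences with the same BERT), the sentence axis can be folded into the batch axis instead of using `TimeDistributed`, keeping only the first element of the returned tuple:

```python
import tensorflow as tf
from transformers import TFBertModel

num_sentences, seq_len = 50, 64
inputs = tf.keras.Input(shape=(num_sentences, seq_len), dtype="int32")
bert = TFBertModel.from_pretrained("bert-base-uncased")

flat_ids = tf.reshape(inputs, (-1, seq_len))    # (batch * 50, 64)
sequence_output = bert(flat_ids)[0]             # keep only the hidden states from the tuple
# 768 is the hidden size of bert-base-uncased
outputs = tf.reshape(sequence_output, (-1, num_sentences, seq_len, 768))
model = tf.keras.Model(inputs, outputs)
```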
transformers
5,237
closed
BART (base) - Finetune: Is this a bug? Or am I doing something wrong?
## I have finetuned on my dataset using the latest code from master, by cloning and then building. Now, when I try to load my finetuned model as shown in the "summarization" examples (previous commit): `from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig model = BartForConditionalGeneration.from_pretrained('FinetuneOutput/best_tfmr') tokenizer = BartTokenizer.from_pretrained('FinetuneOutput/best_tfmr')` I got the following error: `TypeError: special token mask_token has to be either str or AddedTokenFast but got: <class 'dict'>` Am I missing something, or is it a BUG again?
06-24-2020 09:47:05
06-24-2020 09:47:05
Can you post the full error message?<|||||>Are you asking for this or more?? ```bash TypeError Traceback (most recent call last) <ipython-input-13-31ed8b67b601> in <module>() 2 # see ``examples/summarization/bart/run_eval.py`` for a longer example 3 model = BartForConditionalGeneration.from_pretrained('FinetuneOutput/best_tfmr') ----> 4 tokenizer = BartTokenizer.from_pretrained('FinetuneOutput/best_tfmr/') 5 model.eval() 5 frames /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in __init__(self, **kwargs) 511 else: 512 raise TypeError( --> 513 "special token {} has to be either str or AddedTokenFast but got: {}".format(key, type(value)) 514 ) 515 TypeError: special token mask_token has to be either str or AddedTokenFast but got: <class 'dict'> ```<|||||>I can't reproduce this on master. Can you work with a stable release version or do you need features that are on master only?<|||||>I am working from your master branch. After commit no #5227 , its not working, previously it was working fine. There was also an example to load from "best_tfmr". But now its missing from [this](https://github.com/huggingface/transformers/tree/master/examples/summarization) readme<|||||>Ok, we are fixing some issues related to tokenizer serialization here: https://github.com/huggingface/transformers/pull/5056 Maybe it will solve your problem as well. Should be merged pretty soon.<|||||>Thanx a lot Sir. After "factory resetting" colab, and build the project from the source again, solved the issue. I am really sorry for the inconvenience and appreciate the time you gave me. Thank a lot. From next time on wards I will keep in mind to "factory reset" colab. :)
transformers
5,236
closed
Model cards for Hate-speech-CNERG models
Made minor updates in previous model cards and added new cards for the newly updated models.
06-24-2020 09:33:39
06-24-2020 09:33:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=h1) Report > Merging [#5236](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5e31a98ab70607c820cc2ad358d81916adad0313&el=desc) will **decrease** coverage by `0.36%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5236/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5236 +/- ## ========================================== - Coverage 78.34% 77.97% -0.37% ========================================== Files 138 138 Lines 23841 23841 ========================================== - Hits 18679 18591 -88 - Misses 5162 5250 +88 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (-0.92%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.50% <0.00%> (-0.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=footer). Last update [5e31a98...fb34f21](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,235
closed
Does to T5 Transformer training scale to multiple GPUs?
Hello team, I have a large sequence-to-sequence dataset: basically, a huge set of input text sequences mapped to output text sequences. I want to train a T5 network on this. I have the following specific questions. a. Can I use the sample code here (along with my own code) to train T5 on my data? - https://huggingface.co/transformers/model_doc/t5.html b. Will that automatically scale to multiple GPUs? What if I want to further scale to tens of GPUs across different machines? Does HuggingFace support that? Thanks
06-24-2020 08:54:25
06-24-2020 08:54:25
Hi @abhisheknovoic, you can use T5 on multiple GPUs. Have a look at this community notebook https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb. It uses pytorch-lightning, so it's very easy to set up multi-GPU training. See this guide https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html<|||||>@patil-suraj, thanks for the reference. I will take a look at it. Just to confirm, if I use the code as is in the notebook, will it run on multiple GPUs, or do I need to learn a bit about pytorch-lightning and then make some more changes for multi-GPU support? Thanks Suraj!<|||||>You won't need to make any changes to the code; you'll just need to specify the number of GPUs when initialising the Lightning trainer. You can check their multi-GPU docs for more info.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> Hi @abhisheknovoic, you can use T5 on multiple GPUs. Have a look at this community notebook https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb. > > It uses pytorch-lightning, so it's very easy to set up multi-GPU training. See this guide https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html Hello, Suraj. Is there any way to achieve model parallelism using your solution? Or maybe this tool https://towardsdatascience.com/model-parallelism-in-one-line-of-code-352b7de5645a can be applied to your code?<|||||>Hi @patil-suraj, I am trying to use your notebook with pytorch-lightning for multiple tasks; in this case one has multiple eval dataloaders and multiple metrics. Do you have an idea how I can extend your notebook to handle multiple tasks? Thanks a lot
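For readers looking for a concrete starting point, the multi-GPU setup discussed in the comments above boils down to a few `Trainer` arguments in pytorch-lightning. The sketch below is an illustration only: `T5FineTuner` stands in for whatever `LightningModule` wraps the T5 model (for example the one from the community notebook), and the exact argument names may differ between pytorch-lightning versions.
```python
import pytorch_lightning as pl

# Assumption: T5FineTuner is a user-defined LightningModule wrapping
# T5ForConditionalGeneration, e.g. as in the community notebook linked above.
model = T5FineTuner(hparams)  # hparams defined by the user

# Multi-GPU training only requires changing the Trainer arguments:
trainer = pl.Trainer(
    gpus=4,                     # number of GPUs on this machine
    distributed_backend="ddp",  # DistributedDataParallel across processes
    num_nodes=1,                # raise this to spread training over several machines
    max_epochs=2,
)
trainer.fit(model)  # no changes needed inside the LightningModule itself
```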
transformers
5,234
closed
Fix model path
06-24-2020 07:17:39
06-24-2020 07:17:39
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,233
closed
Fix PABEE division by zero error
Fix the `division_by_zero` error when `patience` is set to `0` during inference. https://github.com/JetRunner/PABEE/issues/2
06-24-2020 06:34:18
06-24-2020 06:34:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=h1) Report > Merging [#5233](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9022ef021a56db975d25c7108cbd19d0dd399174&el=desc) will **increase** coverage by `1.27%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5233/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5233 +/- ## ========================================== + Coverage 77.08% 78.36% +1.27% ========================================== Files 138 138 Lines 23841 23841 ========================================== + Hits 18379 18683 +304 + Misses 5462 5158 -304 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.82% <0.00%> (+0.31%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (+0.91%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=footer). Last update [9022ef0...3d20fa2](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,232
closed
BertTokenizerFast.convert_tokens_to_string converts ids to string, not tokens to string
# 🐛 Bug The `BertTokenizerFast.convert_tokens_to_string` function expects a list of integers instead of a list of strings as the function implies. This does not happen for the normal `BertTokenizer`. The [BertTokenizerFast](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_bert.py#L550) does not override `convert_tokens_to_string` as it is defined in [tokenization_utils_fast.py](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_fast.py#L206), which causes this issue. Within `tokenization_utils_fast.py`, the `convert_tokens_to_string` function calls `self._tokenizer.decode` which expects ids (integers not strings). This issue does not arise when using the normal BertTokenizer because that class overrides `convert_tokens_to_string` as can be seen [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_bert.py#L230). However, the implementation in [tokenization_utils.py](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L839) is incorrect according to the docstring. The function should return `" ".join(tokens)` by default and the call to `convert_ids_to_tokens` should be removed because that function accepts ids not tokens. ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ``` from transformers import BertTokenizerFast, BertTokenizer # Error tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") tokens = tokenizer.tokenize("This is a sentence.") print(tokens) output = tokenizer.convert_tokens_to_string(tokens) # No Error because `convert_tokens_to_string` overridden tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") tokens = tokenizer.tokenize("This is a sentence.") print(tokens) output = tokenizer.convert_tokens_to_string(tokens) ``` Output: ``` ['this', 'is', 'a', 'sentence', '.'] Traceback (most recent call last): File "test.py", line 7, in <module> output = tokenizer.convert_tokens_to_string(tokens) File "/home/user/anaconda3/envs/testing/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 209, in convert_tokens_to_string return self._tokenizer.decode(tokens, skip_special_tokens=skip_special_tokens) File "/home/user/anaconda3/envs/testing/lib/python3.8/site-packages/tokenizers/implementations/base_tokenizer.py", line 267, in decode return self._tokenizer.decode(ids, skip_special_tokens=skip_special_tokens) TypeError: 'str' object cannot be interpreted as an integer ``` ## Expected behavior The `BertTokenizerFast.convert_tokens_to_string` function converts a list of tokens (which are strings) to a single string. ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
06-24-2020 04:53:02
06-24-2020 04:53:02
You're right, this method is actually not provided on the Fast tokenizers and is wrongly linked to the `decode()` method. We should remove it in the short term. Do you need it for a specific workflow?<|||||>I need to decode a sequence of input ids to a string. However, I cannot use `tokenizer.batch_decode` because I would like to remove all special tokens except for the [SEP] token, which I want to replace with a token that is not in the tokenizer's vocabulary (so I cannot change the input ids before decoding). To do this I modify the functionality of `tokenizer.convert_ids_to_tokens` to create my modified list of tokens, then I run `tokenizer.convert_tokens_to_string` and `tokenizer.clean_up_tokenization` to create my final sequence.<|||||>I see.

Can you add your special token at the end of the vocabulary without updating the model inputs and then just replace the SEP token by your new token id prior to decoding?

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
tokenizer.add_tokens('[MY_NEW_TOKEN]')
new_token_id = tokenizer.convert_tokens_to_ids('[MY_NEW_TOKEN]')

inputs = tokenizer.encode("hello how are you")
inputs = [new_token_id if tok == tokenizer.sep_token_id else tok for tok in inputs]
decoded_outputs = tokenizer.decode(inputs)
```<|||||>> I see.
>
> Can you add your special token at the end of the vocabulary without updating the model inputs and then just replace the SEP token by your new token id prior to decoding?
>
> ```python
> from transformers import AutoTokenizer
>
> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
> tokenizer.add_tokens('[MY_NEW_TOKEN]')
> new_token_id = tokenizer.convert_tokens_to_ids('[MY_NEW_TOKEN]')
>
> inputs = tokenizer.encode("hello how are you")
> inputs = [new_token_id if tok == tokenizer.sep_token_id else tok for tok in inputs]
> decoded_outputs = tokenizer.decode(inputs)
> ```

Using this example works around this problem and simplifies my code. Thanks.
transformers
5,231
closed
BertAbs run_summarization.py example fails with errors
# 🐛 Bug ## Information Attempting to use BertAbs with the official example script for summarization: https://github.com/huggingface/transformers/tree/master/examples/summarization/bertabs#summarize-any-text The language I am attempting to summarize for is English. Simply attempting to run a command like ```python run_summarization.py --documents_dir ../../../../test-summaries/ --no_cuda true --min_length 50 --max_length 200 --alpha 0.95``` fails with the following error: ``` Traceback (most recent call last): File "run_summarization.py", line 15, in <module> from .utils_summarization import ( ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package ``` I thought the import line was strange and changed it to `from utils_summarization import (` (note that I removed the `.` which preceded `utils_summarization`. This seemed to fix the error, although I am unsure if it is the correct fix. Nevertheless, even with this temporary fix that I made, the `run_summarization.py` script fails with the following error: ``` INFO:filelock:Lock 140401652398456 acquired on /home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084.lock INFO:transformers.file_utils:https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache or force_download set to True, downloading to /home/nikhil/.cache/torch/transformers/tmpjgcj6x3w Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 232k/232k [00:00<00:00, 919kB/s] INFO:transformers.file_utils:storing https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt in cache at /home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 INFO:transformers.file_utils:creating metadata file for /home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 INFO:filelock:Lock 140401652398456 released on /home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084.lock INFO:transformers.tokenization_utils_base:loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 Traceback (most recent call last): File "/home/nikhil/.pyenv/versions/huggingface/lib/python3.6/site-packages/transformers/configuration_utils.py", line 243, in get_config_dict raise EnvironmentError OSError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "run_summarization.py", line 324, in <module> main() File "run_summarization.py", line 309, in main evaluate(args) File "run_summarization.py", line 33, in evaluate model = BertAbs.from_pretrained("bertabs-finetuned-cnndm") File "/home/nikhil/.pyenv/versions/huggingface/lib/python3.6/site-packages/transformers/modeling_utils.py", line 602, in from_pretrained **kwargs, File 
"/home/nikhil/.pyenv/versions/huggingface/lib/python3.6/site-packages/transformers/configuration_utils.py", line 201, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/nikhil/.pyenv/versions/huggingface/lib/python3.6/site-packages/transformers/configuration_utils.py", line 252, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 'bertabs-finetuned-cnndm'. Make sure that: - 'bertabs-finetuned-cnndm' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bertabs-finetuned-cnndm' is the correct path to a directory containing a config.json file ``` Based on the error message, I looked up `bertabs-finetuned-cnndm` on https://huggingface.co/models to find that there is no exact match for this model name. The closest match is called `remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization`. Should the script be updated to include this model name instead? ## Environment info Output of `transformers-cli env`: ``` - `transformers` version: 2.11.0 - Platform: Linux-4.4.0-18362-Microsoft-x86_64-with-debian-bullseye-sid - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cpu (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ```
06-24-2020 04:20:21
06-24-2020 04:20:21
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
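For reference, the two local workarounds described in the issue body above amount to roughly the following. This is a sketch, not an official fix: whether the `remi/...` checkpoint is fully compatible with the example's `BertAbs` class is not verified here, and the `modeling_bertabs` import is assumed to be the module shipped with the bertabs example.
```python
# Sketch of the edits suggested in the issue, applied inside
# examples/summarization/bertabs/run_summarization.py:

# 1) use an absolute import so the script can be run directly:
#    from utils_summarization import ...   # instead of `from .utils_summarization import ...`

# 2) point BertAbs at the full model identifier listed on https://huggingface.co/models:
from modeling_bertabs import BertAbs  # assumption: module name used by the example

model = BertAbs.from_pretrained(
    "remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization"
)
```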
transformers
5,230
closed
Fix convert_graph_to_onnx script
- Remove all references to `args` in methods, using arguments instead. This lets us use the `convert` method directly by importing it in another script.
- Check that the wanted framework is installed before creating the pipeline; otherwise it might fail to instantiate.
06-24-2020 00:28:26
06-24-2020 00:28:26
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=h1) Report > Merging [#5230](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9022ef021a56db975d25c7108cbd19d0dd399174&el=desc) will **increase** coverage by `0.89%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5230/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5230 +/- ## ========================================== + Coverage 77.08% 77.98% +0.89% ========================================== Files 138 138 Lines 23841 23841 ========================================== + Hits 18379 18592 +213 + Misses 5462 5249 -213 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=footer). Last update [9022ef0...833fb6d](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
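As an illustration of what removing the `args` references enables, the conversion entry point can then be imported and called from another script along the lines of the sketch below. The exact parameter list of `convert` is an assumption here and should be checked against the script itself.
```python
# Hypothetical usage sketch; argument names mirror the CLI flags of the script.
from transformers.convert_graph_to_onnx import convert

convert(
    framework="pt",                       # must match an installed backend ("pt" or "tf")
    model="bert-base-cased",
    output="onnx/bert-base-cased.onnx",   # destination for the exported graph
    opset=11,
)
```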
transformers
5,229
closed
Cleaning TensorFlow models
While writing docstrings for #5036, I stumbled upon a few bugs in the TensorFlow models, especially related to the loss computation. I'm patching them in this PR. Here's the list of the bugs solved: ### Loss computation - The loss is computed differently than it is with the PyTorch models. Here it is returned example-wise (therefore with a shape of `(batch_size,)`), whereas the PyTorch models return the loss as a scalar. @jplu is there a reason for this implementation? - The TensorFlow models should be able to handle three types of scenarios: keyword arguments, dictionary, and tuple/list. Right now the `labels` can only be passed through the keyword argument. This PR changes that, and adds a test. ### Missing models in the test files A few models were implemented but were not tested. Some of these models were not working as expected, therefore they've been updated. - TF DistilBERT for multiple choice (added test and patched) - TF DistilBERT for token classification - TF Electra for QA - TF RoBERTa for multiple choice - TF XLNet for multiple choice (added test and patched) ### Misc - Most of the QA models had `is_impossible`, `cls_index` and `p_mask` in their signature while not making use of them. These have been removed. **Users relying on the order of arguments in the signature will be affected by this.** - The `labels` were generally placed before the `output_attentions` and `output_hidden_states` that have recently been added to the models. This resulted in an error in the documentation, as the `labels` (part of the head model) were added after the `output_attentions` and `output_hidden_states` (part of the base model). The arguments have been re-ordered to once again respect the order of `**base_arguments, **head_arguments`.
06-24-2020 00:22:24
06-24-2020 00:22:24
Thanks @LysandreJik ! > The loss is computed differently than it is with the PyTorch models. Here it is returned example-wise (therefore with a shape of (batch_size,), whereas the PyTorch models return the loss as a scalar. @jplu is there a reason for this implementation? Because it is the generic approach to use as some of the other reductions are not compliant with custom training loop. For example I see that you have used `SUM_OVER_BATCH_SIZE` instead of `None` but this removes the compatibility with custom training loops like we have in the trainer, see the [doc](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction). Then can you undo this part please. I do the reduction then directly in the trainer and not in the model, but we can do the reduction manually inside either the loss functions, or the `call` methods as you wish :) > The TensorFlow models should be able to handle three types of scenarios: keyword arguments, dictionary, and tuple/list. Right now the labels can only be passed through the keyword argument. This PR changes that, and adds a test. Good catch, thanks for having fixed this! > Most of the QA models had is_impossible, cls_index and p_mask in their signature while not making use of them. These have been removed. Users relying on the order of arguments in the signature will be affected by this Ok, I didn't know, when I reworked the TF models, I mostly took examples on the list of parameters from the PT part at a time T, I should have been more carefull on later changes. Sorry. > The labels were generally placed before the output_attentions and output_hidden_states that have recently been added to the models. This resulted in an error in the documentation as the labels (part of the head model) were added after the output_attentions and output_hidden_states (part of the base model). The arguments have been re-ordered to once again respect the order of **base_arguments, **head_arguments Thanks!! Like previously I should have been more careful on recent changes. My bad.<|||||>Okay, thanks for the review @jplu, I'll revert that part.<|||||>I'll update the documentation in the next PR<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=h1) Report > Merging [#5229](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `89.06%`. 
[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5229/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5229 +/- ## ======================================== Coverage 77.98% 77.99% ======================================== Files 138 138 Lines 23839 24014 +175 ======================================== + Hits 18592 18729 +137 - Misses 5247 5285 +38 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.63% <51.85%> (+0.40%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.29% <71.42%> (+4.83%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `92.82% <86.66%> (+18.33%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.76% <100.00%> (+3.96%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.74% <100.00%> (+16.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <100.00%> (+3.52%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `77.90% <100.00%> (+2.18%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `91.77% <100.00%> (+11.92%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `25.00% <0.00%> (-73.34%)` | :arrow_down: | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: | | ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=footer). Last update [c01480b...15321a4](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>That's awesome, thanks @LysandreJik !
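To make the reduction discussion above concrete, here is a small standalone illustration of the difference between the two Keras reduction modes being debated (per-example losses versus a single scalar); it is not taken from the PR itself.
```python
import tensorflow as tf

labels = tf.constant([1, 2])
logits = tf.random.normal((2, 5))

# Reduction.NONE keeps one loss value per example, shape (batch_size,),
# which a custom training loop (like the TF Trainer) can reduce itself.
per_example = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE
)(labels, logits)

# SUM_OVER_BATCH_SIZE already collapses the batch to a single scalar,
# matching what the PyTorch models return.
scalar = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE
)(labels, logits)

print(per_example.shape, scalar.shape)  # (2,) vs ()
```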
transformers
5,228
closed
Embedding index out of range in self
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert cased (size=768) Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) When I feed the ids converted by BERTtokenizer to BERT embedding layer, it shows that the dimension does not match. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) task: text binary classification. dataset: ICLR2020 peer reviews ## To reproduce Steps to reproduce the behavior: 1. use BERT model and BERT tokenizer 2. convert the text of any datasets to ids 3. feed to the BERT model <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The dimension should match. Actually, the code works two days ago. I did not change anything and today it does not work. The error is: return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) and index out of range in self. ## Environment info 2020-06-23 23:18:22.738569: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/transformers/commands/env.py:36: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2020-06-23 23:18:24.794569: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F 2020-06-23 23:18:24.849719: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2000160000 Hz 2020-06-23 23:18:24.850210: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x42a8bc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-06-23 23:18:24.850259: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-06-23 23:18:24.857345: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2020-06-23 23:18:24.860619: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected 2020-06-23 23:18:24.860659: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (68cbbf79e491): /proc/driver/nvidia/version does not exist Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 2.11.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
06-23-2020 23:20:58
06-23-2020 23:20:58
Hi, Can you share a self-contained code example reproducing the bug?<|||||>> Hi, Can you share a self-contained code example reproducing the bug? Sorry, I think it is due to a bug in my code. Please close it.<|||||>@zht1130 Were you able to identify the bug? I'm seeing a similar error.<|||||>> @zht1130 Were you able to identify the bug? I'm seeing a similar error. 1. Running the model on the CPU instead of the GPU will give you a more detailed error message. 2. The BERT layer only accepts sequences of token ids whose length is smaller than 512.
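For anyone hitting the same "index out of range in self" error, the usual culprits are token ids that do not come from the matching tokenizer/vocabulary, or sequences longer than BERT's 512-position limit mentioned in the last comment. Below is a minimal sketch of the length fix; the truncation arguments have changed across transformers versions, so treat them as indicative rather than exact.
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")

review = "some very long review text ..."  # placeholder for an ICLR review

# Encode with the same tokenizer as the model and cap the length at 512,
# the size of BERT's position embedding table.
input_ids = tokenizer.encode(review, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(input_ids)
```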
transformers
5,227
closed
[pl_examples] revert deletion of optimizer_step
Using the default `optimizer_step` has at least two issues. The default version... 1) doesn't call `lr_scheduler.step()` 2) does call `self.trainer.scaler.step(optimizer)` I haven't diagnosed which of these is the main culprit of the issue I was seeing (very high loss, not going down). This fixes that issue. I suspect it is mostly the latter. @williamFalcon
06-23-2020 20:20:43
06-23-2020 20:20:43
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=h1) Report > Merging [#5227](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **increase** coverage by `0.06%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5227/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5227 +/- ## ========================================== + Coverage 77.98% 78.05% +0.06% ========================================== Files 138 138 Lines 23839 23839 ========================================== + Hits 18592 18608 +16 + Misses 5247 5231 -16 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5227/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5227/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.30%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5227/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.33% <0.00%> (-0.24%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5227/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.03% <0.00%> (+6.36%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=footer). Last update [c01480b...51bb9b3](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
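For context, the hook being restored looks roughly like the sketch below; the exact argument list is an assumption based on the pytorch-lightning hook of that era, so check the actual `lightning_base.py` for details. The key point is that it steps the LR scheduler itself instead of relying on pytorch-lightning's default behaviour.
```python
# Rough sketch of the restored hook (not the exact diff from the PR).
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   second_order_closure=None):
    optimizer.step()
    optimizer.zero_grad()
    # The default implementation skipped this, so the learning rate never decayed:
    self.lr_scheduler.step()
```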
transformers
5,226
closed
Self documenting Payload instead of Tuples as output of Transformer
I propose a replacing the default Tuple outputs with payloads where every field in the tuple is accessible by a name. This has the following benefits: 1. Being more accessible to newcomers of the library 2. Eliminating suspicious comments in the source code describing the output (see below) 3. Make it easier to extract particular values from different models for inspection -- adding a named field is easier than mangling the order of an existing Tuple 4. Like 3, where new models can output their own metadata without reordering expected structure 5. A consistent output structure makes it easier to compare different models in applications that involve multiple architectures. ## Motivation This will be my first formal feature request, as I am getting weary of forgetting what the Tuple output in the forward pass of a Transformer means. I find myself constantly returning to the source code for each model and playing with the shapes and values of the different fields to check whether something is what I expect it to be. ### The Problem Currently, the output of a Transformer (I use Bert as an example here, potentially not the most recent version) is structured and documented in the source code as follows: ``` python ... for i, layer_module in enumerate(self.layer): if self.output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_outputs = layer_module( hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask ) hidden_states = layer_outputs[0] if self.output_attentions: all_attentions = all_attentions + (layer_outputs[1],) # Add last layer if self.output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) outputs = (hidden_states,) if self.output_hidden_states: outputs = outputs + (all_hidden_states,) if self.output_attentions: outputs = outputs + (all_attentions,) if self.output_additional_info: outputs = outputs + (all_additional_info,) return outputs # last-layer hidden state, (all hidden states), (all attentions) ``` Of course, that last comment is highly dependent on what you pass to the configuration. What if you desire all the attentions but not the hidden_states? Now, attention is at `outputs[1]` instead of `outputs[2]`. Utterly confusing. And what if you have an architecture that has different outputs? Here is an example from the output of a T5 model. ``` python ... # Add last layer if self.output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) outputs = (hidden_states,) if use_cache is True: assert self.is_decoder, "`use_cache` can only be set to `True` if {} is used as a decoder".format(self) outputs = outputs + (present_key_value_states,) if self.output_hidden_states: outputs = outputs + (all_hidden_states,) if self.output_attentions: outputs = outputs + (all_attentions,) return outputs # last-layer hidden state, (presents,) (all hidden states), (all attentions) ``` There's this new field `presents` that again confuses the order. It starts to get a bit confusing. In addition, one of my projects the past many months has been to visually compare and interpret different Transformer models [exbert](http://exbert.net/). This means that I often want to edit the source code and extract, for example, the keys / values / head embeddings prior to the final projection into the embeddings that are passed to the next layer. 
Extracting these and passing them through the model is more complicated than it should be -- there are no hooks that I can use to catch arbitrary information within a module's forward pass (potentially a separate feature request, but I feel this would slow the prototyping speed of this library quite a bit), and I worry about messing with an expected order to the Tuple output. It is also really easy to forget which field in a list of 8 items is the one I want. Architectures are also incorporated quickly into Transformers (kudos!), and it would be great to know what inference information I have available for a model simply by looking at the object outputted by the forward pass. ### Possible Solutions I would like the return object of every Transformer's forward pass to be a Payload where the information outputted is easily identified by the fields. E.g., for a LMHead Transformer: ``` python { logits: __, past: __, hidden_states: __, attentions: __, } ``` Where a non-LMHead Transformer would not include logits and offset indexing into the output. This also allows models like T5 to unambiguously add additional fields without compromising the structure of the tuple. ``` python { last_layer_hidden_state: __, presents: __, } ``` It would be trivial to add fields to this at the output: ``` python output = {"last_hidden_state": hidden_states} if self.output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) outputs['all_hidden_states'] = all_hidden_states if self.output_attentions: outputs['attentions'] = all_attentions return outputs # Self documenting fields, no comment needed! :D ``` Payloads like this would also work for intermediate modules, though some naming convention to indicate the main output intended to be used by the next module/layer would be necessary. Python's `namedtuples` could also be an option, though easily adding fields to this immutable structure is a bit more challenging for interpretability work. Additional alternatives could be [`namedlist`](https://pypi.org/project/namedlist/) or [`recordclass`](https://pypi.org/project/recordclass/). ## Contribution I am willing to continue thinking of solutions and work towards this goal, but making a single PR would be both a sweeping change across library and the way each module is coded. However, I believe this change to be important for increasing the accessibility of the library for both newcomers and those who want to build applications around the different models.
06-23-2020 20:14:26
06-23-2020 20:14:26
I agree with this change, but what about backward compat? I think, as you said, `namedtuple` could be used.<|||||>I think the `namedtuple` would be best for both getting the desired features and also backward compatibility. But to make it backward compatible, wouldn't we just need any object that supports indexing (edit: and unpacking)? What other special features of a tuple are important to maintain for backwards compatibility?<|||||>Unfortunately, `torch.jit` does not support dicts, namedtuples or other kinds of fancy outputs... just plain old tuples. See [this issue](https://github.com/pytorch/pytorch/issues/373440) for instance. Dict support has been added recently, so we could consider switching to that with a breaking change once it lands in a stable release of PyTorch, but this would pin us on PyTorch 1.6.0 minimum... Not sure the benefit of this for documentation would be worth it when every output of every model is cleanly explained in its documentation.<|||||>I would argue that, despite clean documentation, there is a huge advantage to having self-documenting payloads. Playing around with the outputs in a Jupyter notebook will give you auto-completion of the fields, remove any ambiguity, and (if you are trying to develop an application that can interpret the output of as many different transformer models as possible) remove the need to look up the output format for every model and version of a model (e.g., `LMHead` or `NextSentencePrediction`). I suppose if we wanted to avoid breaking changes with JIT, we could allow each model to have an optional parameter that enables annotation of the output or not. I believe we would find that it would ideally be enabled by default for a smoother user experience, and then any decorator that JITs the code could disable the annotation in favor of a regular tuple. But even if it is a flag that we have to set manually, it would still enable better applications needing to support different models. Thoughts then on making it an optional flag?<|||||>Yes, @julien-c also gave me the idea of the flag, hadn't thought of it. You can check the PR linked above for a prototype of doing this while not breaking any backward compatibility.<|||||>Skimmed through the changes and it looks very nice! Would love to see something like this for all the models. Thanks so much for doing this!<|||||>I think this is now closed by #5438.
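As a tiny illustration of the `namedtuple` option discussed in the comments (the field names here are hypothetical), positional indexing and unpacking keep working, which is what makes it a candidate for a backward-compatible change:
```python
from collections import namedtuple

# Hypothetical output type; the real field set depends on the model and head.
LMOutput = namedtuple("LMOutput", ["logits", "past", "hidden_states", "attentions"])

outputs = LMOutput(logits="logits", past=None, hidden_states=None, attentions=None)

assert outputs[0] is outputs.logits  # old tuple-style indexing still works
logits, past, *rest = outputs        # tuple unpacking still works
print(outputs._fields)               # fields are self-documenting, no comment needed
```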
transformers
5,225
closed
Add hugs
Enforce that there are not transformers, Transformers, `transformers` but only 🤗 Transformers in the documentation.
06-23-2020 19:21:37
06-23-2020 19:21:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=h1) Report > Merging [#5225](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **increase** coverage by `0.35%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5225/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5225 +/- ## ========================================== + Coverage 77.98% 78.34% +0.35% ========================================== Files 138 138 Lines 23839 23839 ========================================== + Hits 18592 18676 +84 + Misses 5247 5163 -84 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.27% <0.00%> (-0.89%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.82% <0.00%> (+0.31%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (+0.91%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=footer). Last update [c01480b...1aa097c](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,224
closed
Use the script in utils
Since we have the script `download_glue_data` in the utils folder, this changes the instructions in the README for the GLUE example to use it for now (of course, nlp will ultimately make this even easier), since it's easier than copying the gist into a local file.
06-23-2020 18:40:25
06-23-2020 18:40:25
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=h1) Report > Merging [#5224](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5224/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5224 +/- ## ========================================== - Coverage 77.98% 77.98% -0.01% ========================================== Files 138 138 Lines 23839 23839 ========================================== - Hits 18592 18590 -2 - Misses 5247 5249 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.30%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=footer). Last update [c01480b...45e8866](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,223
closed
Only put tensors on a device
Fix Trainer when users have inputs containing non-tensor values.
06-23-2020 18:28:37
06-23-2020 18:28:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=h1) Report > Merging [#5223](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **decrease** coverage by `0.03%`. > The diff coverage is `60.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5223/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5223 +/- ## ========================================== - Coverage 77.98% 77.95% -0.04% ========================================== Files 138 138 Lines 23839 23841 +2 ========================================== - Hits 18592 18586 -6 - Misses 5247 5255 +8 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5223/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <60.00%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5223/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5223/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5223/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.30%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=footer). Last update [c01480b...10ff478](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
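The gist of the fix is an `isinstance` check before moving each value of the input dict to the device; here is a simplified standalone sketch of that behaviour (not the exact Trainer code).
```python
import torch

def move_batch_to_device(inputs: dict, device: torch.device) -> dict:
    # Only tensors are moved; non-tensor values (ids, strings, lists) pass through.
    for k, v in inputs.items():
        if isinstance(v, torch.Tensor):
            inputs[k] = v.to(device)
    return inputs

batch = {"input_ids": torch.tensor([[101, 102]]), "example_id": "abc-123"}
batch = move_batch_to_device(batch, torch.device("cpu"))
```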
transformers
5,222
closed
Add version control menu
This PR adds a menu at the top of the navigation bar to pick a version of the docs. A few comments: When switching versions, the reader is sent to the same page of the docs in the older version (so it gives an error if the same page did not exist in that version of the docs). I don't know if this is preferable to the alternative (sending the reader back to the index of the other version of the docs). Let me know what you think. The menu will disappear once the reader goes to an older version of the docs (since it did not exist back then), unless we find a way to cherry-pick it into each release (but I doubt it's worth it). Preview is [here](https://51918-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html). Side note: on a local build (like this one), the version appears as 'html', but it will be the right one once merged. Also, the preview/local build only has one version, so the links don't work there.
06-23-2020 18:12:16
06-23-2020 18:12:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=h1) Report > Merging [#5222](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c439752482759c94784e11a87dcbf08ce69dccf3&el=desc) will **decrease** coverage by `0.10%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5222/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5222 +/- ## ========================================== - Coverage 78.07% 77.97% -0.11% ========================================== Files 138 138 Lines 23786 23786 ========================================== - Hits 18572 18547 -25 - Misses 5214 5239 +25 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.61% <0.00%> (-0.20%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=footer). Last update [c439752...12b85f4](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>nice. In terms of UI you could also just have used a `<select>` element (maybe slightly more explicit UI) but I guess this works too
transformers
5,221
closed
gpt2.generate breaks on FP16 Apex training.
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT2 Language I am using the model on (English, Chinese ...): Molecule The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Run Pytorch lightning with Apex (or just default apex training), then during validation try and generate samples with the model (which is on fp16). ## Trace ```
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1579022034529/work/aten/src/THC/THCReduceAll.cuh line=327 error=716 : misaligned address
Validation sanity check: 0it [00:00, ?it/s]
Validation sanity check: 50% 1/2 [00:01<00:01, 1.08s/it]
generating smiles
Traceback (most recent call last):
  File "/home/user/miniconda/envs/py36/bin/transformervae", line 11, in <module>
    load_entry_point('exs-transformervae', 'console_scripts', 'transformervae')()
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/app/transformervae/cli.py", line 404, in pretrain
    trainer.fit(model)
wandb: Waiting for W&B process to finish, PID 28
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in fit
    self.single_gpu_train(model)
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 176, in single_gpu_train
    self.run_pretrain_routine(model)
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1076, in run_pretrain_routine
    False)
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 330, in _evaluate
    eval_results = model.validation_epoch_end(outputs)
  File "/app/transformervae/models/base.py", line 95, in validation_epoch_end
    return self._shared_eval_end(output, "val")
  File "/app/transformervae/models/lm.py", line 152, in _shared_eval_end
    use_cache=True,
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
    return func(*args, **kwargs)
  File "/app/transformervae/models/lm.py", line 173, in generate_no_grad
    generated_ids = self.encoder.generate(input_ids, **kwargs)
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
    return func(*args, **kwargs)
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1181, in generate
    model_specific_kwargs=model_specific_kwargs,
  File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1285, in _generate_no_beam_search
    if unfinished_sents.max() == 0:
RuntimeError: cuda runtime error (716) : misaligned address at /opt/conda/conda-bld/pytorch_1579022034529/work/aten/src/THC/THCReduceAll.cuh:327
``` ## Expected behavior Should generate samples. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Ubuntu - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 cu101 - Tensorflow version (GPU?): n/a - Using GPU in script?: V100 - Using distributed or parallel set-up in script?: no but FP16 training
06-23-2020 17:37:18
06-23-2020 17:37:18
This may be an error on my part, apologies. Just confirming.<|||||>This is supposed to work, misaligned address is usually a flavor of OOM. I would try to cut your batch size.<|||||>@sshleifer Batch size is 16, am on a V100 on FP16. Also I'm training with a batch size of 512, so surely batch size is fine in this case?<|||||>Okay, turns out the problem is still there. Here is my code: ``` @torch.no_grad() def generate_no_grad(self, num_samples, batch_size, **kwargs): device = next(self.parameters()).device num_iters = num_samples // batch_size all_smiles = [] for idx in range(num_iters): input_ids = torch.empty(batch_size, 1).fill_(self.tokenizer.bos_token_id) input_ids = input_ids.to(device).long() generated_ids = self.encoder.generate(input_ids, **kwargs) smiles = self.tokenizer.decode(generated_ids) all_smiles.extend(smiles) return all_smiles def generate(self): smiles = self.generate_no_grad( num_samples=50, batch_size=16, max_length=self.collater.max_length, do_sample=True, num_beams=1, temperature=1.0, top_k=500, top_p=1.0, repetition_penalty=1.0, pad_token_id=self.tokenizer.pad_token_id, bos_token_id=self.tokenizer.bos_token_id, eos_token_id=self.tokenizer.eos_token_id, length_penalty=1, no_repeat_ngram_size=0, num_return_sequences=1, use_cache=True, ) .... ``` Bugs out on batch size 16, num_samples=10_000, max_length=100<|||||>@patrickvonplaten Any idea with this? Also, I'm on torch 1.4 and cu101, could upgrading to 1.5.1 and cu102 fix this?<|||||>I cannot reproduce the error in the notebook. Looking into it more. <|||||>Hmmmm, getting the same error on batch size 8 and num samples = 50. here's my colab where I'm trying to reproduce: https://colab.research.google.com/drive/13uvd_Y_VHoZqQxyZ0OdyyNAJEurrW2LX?usp=sharing @sshleifer I'm guessing the CICD checks for torch 1.4 cu101?<|||||>Okay. I can only think that this is a pytorch 1.4 cu101 error. Will update image to 1.5.1, try to tomorrow and update this accordingly. Cheers!<|||||>Is there anything from the code above that would warrant this error?<|||||>Well, well, well @sshleifer I moved to the pytorch nightly conda builds (1.6-dev, which has native AMP) and now the issue is no longer there. Weird ...<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
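Editor's note: the thread above was resolved by moving from Apex to the PyTorch 1.6 nightlies with native AMP. The snippet below is only a sketch of that workaround; the model name, prompt, and hyperparameters are placeholders and not taken from the original report.
```python
# Sketch of the workaround the thread converged on: train with torch.cuda.amp
# (PyTorch >= 1.6) instead of Apex, and call generate() outside the autocast region.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scaler = torch.cuda.amp.GradScaler()

def training_step(batch_input_ids):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():           # mixed-precision forward pass
        loss = model(batch_input_ids, labels=batch_input_ids)[0]
    scaler.scale(loss).backward()             # scaled backward to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

@torch.no_grad()
def validation_generate(prompt="Hello"):
    input_ids = tokenizer.encode(prompt, return_tensors="pt").cuda()
    # sampling runs outside autocast, so the reduction ops stay in fp32
    return tokenizer.decode(model.generate(input_ids, max_length=30, do_sample=True)[0])
```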
transformers
5,220
closed
run_language_modeling.py does not output vocab/config/etc files until training completes
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT2 Language I am using the model on (English, Chinese ...): English The problem arises when using: * the official example scripts: I began training using the following command (from Jupyter Lab) - output_100k is the specified (new) output folder for my fine tuned model (which in being trained on 100k US patents): `!python run_language_modeling.py \ --output_dir=output_100k \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --block_size 100 \ --per_device_train_batch_size 3 \ --do_train \ --train_data_file=./train_100k.txt \ --do_eval \ --eval_data_file=./test_100k.txt` This works fine; HOWEVER, no instructions are given in the example [README](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) on how to resume training later on. **Including this information would be extremely helpful**. After some personal research, I deduced (perhaps incorrectly?) that I should point 'model_name_or_path' at the output_100k checkpoint folder like so (note: the README also doesn't specify whether or not it's okay to keep the original output-dir name, so I made a new one): `!python run_language_modeling.py \ --output_dir=output_100k_resumed \ --model_name_or_path=./output_100k/checkpoint-12500 \ --block_size 100 \ --per_device_train_batch_size 3 \ --do_train \ --train_data_file=./train_100k.txt \ --do_eval \ --eval_data_file=./test_100k.txt` This raises the following error: `OSError: Model name './output_100k/checkpoint-12500' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed './output_100k/checkpoint-12500' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.` I checked the directory, and indeed, there are no vocab.json, merges.txt, etc files included in the specified output directory from the initial training. It appears that **run_language_modeling.py** does not output these files until the **end** of training (an examination of other models training using the script shows that these files are present in completed training sessions. **What am I doing wrong?** Or is this a bug in the script itself? I'd imagine these vocab/merges files can be outputted relatively early in training. Note, when training on a new dataset starts, a file with the name of the form `cached_lm_GPT2Tokenizer_[SOME NUMBER]_train_clean.txt.lock` (and one without the .lock) is generated. Perhaps this contains some information that I'd need to resume training? Regardless, I see no documentation explaining this file's purpose linked in the README, which - again - could definitely provide more context. The tasks I am working on is: * my own task or dataset: (give details below) The idea is to build an effective context-aware text generator for legal purposes. The dataset simply consists of a bunch of patent document texts. ## To reproduce Steps to reproduce the behavior: **See description above** ## Expected behavior Model resumes training at the specified checkpoint. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 2.11.0 - Platform: Linux-5.3.0-59-generic-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No modifications to script, but yes, this computer uses GPU. - Using distributed or parallel set-up in script?: No.
06-23-2020 17:05:52
06-23-2020 17:05:52
**NOTE:** Currently retraining. At the 500-it checkpoint save point, the following is outputted: `"loss": 3.162974836349487, "learning_rate": 4.207104345068189e-05, "epoch": 0.47573739295908657, "step": 500} 06/23/2020 10:25:35 - INFO - transformers.trainer - Saving model checkpoint to ./output_100k_run2/checkpoint-500 06/23/2020 10:25:35 - INFO - transformers.configuration_utils - Configuration saved in ./output_100k_run2/checkpoint-500/config.json 06/23/2020 10:25:35 - INFO - transformers.modeling_utils - Model weights saved in ./output_100k_run2/checkpoint-500/pytorch_model.bin /home/b/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler. ` I'm particularly concerned about that last part: `/home/b/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.` Is that not implemented by default? Do I have to pass in a specific argument to fix it? Or is this something that should be ignored? Not sure if this is related to the above bug, but... maybe? **NOTE 2:** The contents of the output folder at this point (500 iterations of training): `checkpoint-500` And the contents of a similar model output dir that I trained fully: `checkpoint-1000 config.json special_tokens_map.json vocab.json checkpoint-1500 merges.txt tokenizer_config.json checkpoint-500 pytorch_model.bin training_args.bin ` Hopefully that highlights the issue - all those supplementary files don't seem to be being saved until the end of training. <|||||>Hi, I'm also having this issue where `run_language_modeling.py` is not creating the files needed to resume the training until it finishes and by then you wont need the files to resume it as it will have finished, also cant generate text until the vocab,tokens,etc are created as those files are needed for the model to work correctly, in my case im using the code from [this repository](https://github.com/itsuncheng/fine-tuning-GPT2) but technically it's the same code found on the [example/language-modeling](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) folder with just some extra information on the readme file. Have you guys found any way to make it work ?<|||||>Looks like in #3921 a similar issue was fixed in https://github.com/huggingface/transformers/commit/c81152600452ad1bec4ab705356788d29a3573ee by adding `tokenizer.save_pretrained(training_args.output_dir)` to the end of the script. Could this be done for every checkpoint, instead of just at the final output step? If not, how are the checkpoints supposed to be used? Thanks!<|||||>> I'm particularly concerned about that last part: > `/home/b/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.` I have the same warning. Sounds bad for me... Can I just ignore it? ```bash UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler. ``` <|||||>This appears to still be an issue. Has anyone found a solution?<|||||>@apteryxlabs Here's a not-so-eloquent short term fix. I'm just forcing the optimizer to load around line 515 of trainer.py. optimizer = torch.load('path/to/optimizer.pt'). Again, not eloquent but it seems to work. 
Still trying to figure out why the optimizer isn't loading properly earlier on. <|||||>> @apteryxlabs Here's a not-so-eloquent short term fix. I'm just forcing the optimizer to load around line 515 of trainer.py. optimizer = torch.load('path/to/optimizer.pt'). Again, not eloquent but it seems to work. Still trying to figure out why the optimizer isn't loading properly earlier on. Hi, thanks for the hint. Would you please elaborate which line around line 515? I saw an optimiser.pt appearing at line 623, another 463, not sure which one you are referring to. Also, is it just setting the path of this optimizer.pt brutally to the file under the /checkpoint-x directory?<|||||>I am having the same issues: not saving supplementary files at checkpoints and also the warning about loading the optimizer. Has anyone found a solution?<|||||>It seems that supplementary file saving at checkpoints has been fixed (I am now seeing checkpoint saving in finetune). But I am still seeing "Warning: Please also save or load the state of the optimzer when saving or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning)" Steps to reproduce: 1) clone transformers into new director 2) cd transformers && pip install .e; cd examples && pip install -r requirements.txt 3) cd seq2seq && ./finetune_t5_bart_tiny.sh Observe that the warning is printed: ../python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Could not log computational graph since the `model.example_input_array` attribute is not set or `input_array` was not given warnings.warn(*args, **kwargs) .../python3.8/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning) (There is both the optimizer warning and the computational graph logging warning)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
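Editor's note: a minimal sketch of the workaround discussed in this thread. Intermediate checkpoints only contain `config.json` and `pytorch_model.bin`, so the tokenizer files can be written into the checkpoint folder manually before resuming; the checkpoint path below is a placeholder from the report, not a guaranteed location.
```python
# Save the tokenizer used for the original run into the checkpoint directory so that
# --model_name_or_path=./output_100k/checkpoint-12500 can be loaded again.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")              # tokenizer of the first run
tokenizer.save_pretrained("./output_100k/checkpoint-12500")    # writes vocab.json, merges.txt, ...
# Alternatively, pass --tokenizer_name=gpt2 to run_language_modeling.py when resuming,
# so the script does not look for vocabulary files inside the checkpoint folder.
```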
transformers
5,219
closed
[Longformer] Major Refactor
## Longformer Refactor This PR does a major refactoring of Longformer. Mainly, the Roberta abstraction is removed and composition is chosen instead. This has the following advantages: - It's easier now to implement a `cross_attention_layer` - The code is more readable and the logic stays in this file only - A bug was corrected regarding the attention mask. @ibeltagy - maybe you can check this as well. Previously, if **no** `attention_mask` was inserted, the padding function that came before `super.forward()` in `LongformerModel` was not used, **but** if instead an `attention_mask = torch.tensor([1, ..., 1])` (attend to all tokens) was passed, the padding function was applied and could lead to different outputs than when no `attention_mask` is passed. This should not be the case. `model(input_ids)` and `model(input_ids, attention_mask=torch.ones(input_ids.shape))` should always yield the same result. Removing the `super.forward()` abstraction makes the code much cleaner here so that an `attention_mask = torch.ones(input_ids.shape)` can be calculated before calling the longformer encoder. **IMPORTANT** Since in almost all tasks longformer somehow passes either a `global_attention_mask` or `attention_mask` to `LongformerModel`, this bug did not really become visible before. - We don't have to "inject" a `self-attention layer` into another model anymore, which I did not like very much. - Unnecessary code can be removed (head_mask, previous cross-attention layer inputs that do not work yet), ... **Additionally**: - Variable names are made more explicit, dead code (if statements that could never be reached) was removed, and the code is simplified. - The forward function of the self-attention layer is broken up into multiple helper functions. The advantage here is that quite some memory should be saved because `attention_probs` can go out of scope as soon as they are no longer used, which reduces the memory bottleneck. - All longformer models are added to the tests (@sgugger) and a couple more tests are added. Next step is to add cross attention layers to longformer. **Review** I made sure that besides the bug with `attention_mask = None` vs `attention_mask = torch.ones(...)` all outputs stay the same. Would be great if @thomwolf @LysandreJik @sgugger @sshleifer @ibeltagy can do a quick review.
06-23-2020 16:50:28
06-23-2020 16:50:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=h1) Report > Merging [#5219](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9a473f1e43221348334b9e7f95bb45770b7ef268&el=desc) will **decrease** coverage by `0.81%`. > The diff coverage is `92.60%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5219/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5219 +/- ## ========================================== - Coverage 77.85% 77.04% -0.82% ========================================== Files 138 138 Lines 24314 24409 +95 ========================================== - Hits 18930 18806 -124 - Misses 5384 5603 +219 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `91.66% <92.60%> (-1.45%)` | :arrow_down: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.39% <0.00%> (-0.15%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=footer). Last update [9a473f1...90d2aa6](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@sshleifer and @ibeltagy - thanks a lot for your comments -> cleaned up the comments and some function naming. All slow and normal tests pass on GPU => good to merge.
transformers
5,218
closed
AttributeError: module 'tensorflow' has no attribute 'repeat'
I tried to run the pipeline task 'summarization', but got an error with "module **'tensorflow' has no attribute 'repeat' "** Has anyone encountered the same problem? How can I fix it? **my installed tensorflow == 2.0.0** error messages: /home/ww/anaconda3/envs/environment_name/lib/python3.6/site-packages/transformers/pipelines.py", line 1446, in __call__ inputs["input_ids"], attention_mask=inputs["attention_mask"], **generate_kwargs, File "/home/ww/anaconda3/envs/environment_name/lib/python3.6/site-packages/transformers/modeling_tf_utils.py", line 747, in generate tf.repeat(tf.expand_dims(tf.range(batch_size), -1), repeats=num_beams * effective_batch_mult, axis=1), AttributeError: module 'tensorflow' has no attribute 'repeat'
06-23-2020 16:20:52
06-23-2020 16:20:52
@LysandreJik @jplu It seems that Tensorflow does not include `repeat` in 2.0 and 2.0.1 (see https://github.com/tensorflow/tensorflow/issues/38839). Perhaps best to have 2.1 as a min requirement?<|||||>Indeed, TF < 2.1 doesn't have the `tf.repeat()` function. Putting TF >= 2.1 as the min requirement looks like a good solution to me. @LysandreJik are you ok with this?<|||||>Yeah, I checked the TensorFlow API and upgraded my TF to 2.2, and it works now.<|||||>Sure, I'm okay with this!
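Editor's note: for anyone stuck on TF 2.0, the failing call in `generate` can be reproduced in isolation and emulated with `tf.tile`; this is only a sketch with placeholder values, not something the library itself does. It works here because the repeated axis has size 1, so tiling and repeating give the same tensor; upgrading to TF >= 2.1, where `tf.repeat` exists, remains the cleaner fix.
```python
import tensorflow as tf

batch_size, num_beams, effective_batch_mult = 2, 3, 1
ids = tf.expand_dims(tf.range(batch_size), -1)                      # shape (batch_size, 1)

if hasattr(tf, "repeat"):                                           # TF >= 2.1
    expanded = tf.repeat(ids, repeats=num_beams * effective_batch_mult, axis=1)
else:                                                               # TF 2.0 fallback
    expanded = tf.tile(ids, [1, num_beams * effective_batch_mult])  # same shape and content here
```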
transformers
5,217
closed
Create README.md
electra_large_discriminator_squad2_512 Question Answering LM
06-23-2020 14:58:27
06-23-2020 14:58:27
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=h1) Report > Merging [#5217](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b28b53713161a6299c757c32f7179a2cb2d8cbd7&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5217/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5217 +/- ## ========================================== + Coverage 77.96% 77.98% +0.02% ========================================== Files 138 138 Lines 23838 23838 ========================================== + Hits 18585 18590 +5 + Misses 5253 5248 -5 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.15% <0.00%> (+0.14%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=footer). Last update [b28b537...6f847e8](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks! [model page](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512)
transformers
5,216
closed
[WIP - Don't merge yet][Pipeline] Make "task" a static class variable
This PR removes "task" from the "init" of the class and adds it as a static variable. In my opinion, it is cleaner to have "task" as a static variable instead of an object attribute. This would also solve #5210 . The problem we would run into then is that for the "Translation" pipeline we would need a class for each translation. I think this is still the better option though because it is less prone to errors (see #5210) and we could add a "get translation factory design" or something. After some discussion with @mfuntowicz, I think the best option is to add two additional parameters `src_lang` and `tgt_lang` to the pipelines function and delete the task names `translation_en_to_fr` in favor of just `translation`. The new recommended way of instantiating a translation pipeline is ```python translation_en_to_fr = pipeline("translation", src_lang="en", tgt_lang="fr") ``` The option: ```python translation_en_to_fr = pipeline("translation_en_to_fr") ``` is still supported with a future warning. @mfuntowicz @julien-c @LysandreJik @sshleifer What do you think? **Backward Compatibility** It should be fully backward compatible, but I added some warning statements to let the user know of Future Depreciation ### TODO: If this PR is ok for you, I will update the docs and add tests.
06-23-2020 13:46:59
06-23-2020 13:46:59
Just noticed that this would break the Marian translation pipeline though. Maybe for the Translation pipeline we should keep the "task" as an `__init__` argument. <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=h1) Report > Merging [#5216](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1ae132a07d7f294cf58cd50f7db8723d00e282de&el=desc) will **decrease** coverage by `0.36%`. > The diff coverage is `94.11%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5216/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5216 +/- ## ========================================== - Coverage 77.49% 77.12% -0.37% ========================================== Files 138 138 Lines 23787 23815 +28 ========================================== - Hits 18433 18368 -65 - Misses 5354 5447 +93 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `77.21% <94.11%> (+0.80%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=footer). Last update [1ae132a...e487570](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM! Thanks @patrickvonplaten 🙏 <|||||>@sshleifer - can you check if this is fine for Marian translation?<|||||>Close this for now -> Pipelines will be updates when working on Pipelines v2 with @mfuntowicz
transformers
5,215
closed
TF2 support for Longformer
# 🚀 Feature request Hi, I'm currently working on a project involving long documents (6000+ tokens). I normally work with Tensorflow and I was wondering if there are any plans for adding a Longformer TF model in the near future? My PyTorch knowledge is fairly limited, but given the potential of the Longformer for my project, I would want to learn the basics if there are no plans for adding TF support. Kind regards
06-23-2020 11:47:16
06-23-2020 11:47:16
Yes, we are planning to add this in ~1 month<|||||>Great, looking forward to it, thanks!<|||||>+1 on this @patrickvonplaten any news? :)<|||||>+2 on this<|||||>Finished by end of the week :-) See https://github.com/huggingface/transformers/pull/5764. It's almost finished :-) <|||||>@patrickvonplaten any reference on how to train unsupervised model for longformer (not fine-tuning)?
transformers
5,214
closed
How to predict on a batch?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**: [Link](https://stackoverflow.com/questions/62533181/huggingface-transformers-library-predict-in-batches)
06-23-2020 11:46:17
06-23-2020 11:46:17
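Editor's note: the question above links out to Stack Overflow; for completeness, here is a minimal sketch of batched inference. The classification model name is only an example and the texts are placeholders; padding plus the attention mask is what lets sequences of different lengths go through one forward pass.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

texts = ["I love this movie", "This was a terrible idea", "Not bad at all"]
inputs = tokenizer.batch_encode_plus(texts, pad_to_max_length=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs)[0]                 # shape (batch_size, num_labels)
probs = torch.softmax(logits, dim=-1)
predictions = probs.argmax(dim=-1).tolist()     # one predicted label per input text
```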
transformers
5,213
closed
Train EncoderDecoder Models for question generation
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> ### How to train models for text generation Hi Everyone, I am trying to finetune an Encoder Decoder model on question generation task on SQuAD. Input data are a concatenation of answer span and context and outputs are the question. `inputs = tokenizer.encode_plus(example.answer, example.context, add_special_tokens=True, max_length=max_length, truncation='only_second')` `label = tokenizer.encode_plus(example.question, add_special_tokens=True, max_length=max_length_label, truncation=True)` `decoder_input_ids, label_ids = data_collator.mask_tokens(torch.tensor(label_ids).unsqueeze(0))` I add padding to all of these arguments if necessary and pass them to the model which can be: - an encoder decoder model: `model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')` `inputs = {'input_ids': batch[0], 'attention_mask': batch[1], 'token_type_ids' : batch[2], 'decoder_input_ids': batch[3], 'lm_labels' : batch[5]}` `outputs = model(**inputs)` - a BART model : `model = BartForConditionalGeneration.from_pretrained(model_name)` `inputs = {'input_ids': batch[0], 'attention_mask' : batch[1], 'decoder_input_ids': batch[2], 'labels' : batch[3] }` I thought that everything was alright and I started training my two models. As the training progressed, the mlm_probability of the datacollator object increased from 0.20 to 0.40 and then to 1. The learning rate and the optimizer are as follows: (lr around 3e-5) `optimizer = AdamW(model.parameters(), lr=learning_rate, eps=adam_epsilon)` `scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=t_total, num_cycles=num_cycles)` The eval loss was decreasing all along the 100 epochs for the BERT2BERT but it didn't looked like the questions were improving: epoch 50: what country did the french support in libya????? - 2013, 2014?? what country did nasser end to the coup? in 1989, 2007 and 2008 - 2011's what country did the us state have to use a particular prohibition of fuel in its oil? 2007 epoch 100: where was the fisafat for? islamic party in libya and al - farabut movement what did the unfyadi want to end in 1990? - 1991, 2003 and gulf what country did the oil industry stop its fuel and coaling? in a world, which countries The observation remains the same for BART model: 100 steps: ? what was the name of normnormandy in frfrance. ? when did people in the first half of what began to give their ? who were the people that did not to swear fealty oath in 4 epochs: normnormnaandyanye gave given offered name namesNames to forfor normnormNormansons gave given granted their own original initial ancestral native normnormdonaldansons descended originated originate originating from origins origin?ers My questions are: **Do you think that something is wrong with my training? 
What do you think about the performance? Do you have any suggestions for the question generation task? How are the decoder input ids supposed to change for a next-word-prediction loss? Should I use a next-word-prediction loss or a masked LM loss? How can I use dropout with a pretrained model?** Thank you in advance for your help, and I hope that my post will be useful to others; if need be I can share a bigger part of my code :)
06-23-2020 09:54:21
06-23-2020 09:54:21
Hey @joachim-dublineau , not a direct answer to your question, but here's a relevant discussion thread #4399<|||||>And can you post the code where you prepare the `decoder_input_ids` and `labels`?<|||||>Hi @patil-suraj , Thanks for your quick reply. I have indeed seen this topic previously without finding answers to my points. For the code, I use the datacollator (https://github.com/huggingface/transformers/blob/5f721ad6e48c9d846de25c3fefa0e50a306cbf10/src/transformers/data/data_collator.py) and its function mask_tokens(labels_ids)<|||||>You won't need `mask_tokens`. `mask_tokens` is used for masked language modelling, it masks some tokens in the input, so maybe this why you are seeing the weird output. For bart ``` source_ids, source_mask, y = batch["input_ids"], batch["attention_mask"], batch["decoder_input_ids"] y_ids = y[:, :-1].contiguous() lm_labels = y[:, 1:].clone() lm_labels[y[:, 1:] == pad_token_id] = -100 ``` `input_ids` will be your tokenized context and `decoder_input_ids` will be tokenized question. for enc-dec, you can pass the encoded input to input_ids and encoded question to `decoder_input_ids` and `lm_labels` ``` source_ids, source_mask, y = batch["input_ids"], batch["attention_mask"], batch["decoder_input_ids"] model(input_ids=source_ids, decoder_input_ids=y, lm_labels=y) ``` Hope this is clear<|||||>So I shouldn't use mask_tokens, ok thank you ! What I don't get is that if I provide the question in decoder_input_ids, first the decoder will have the ground truth and then why should I also use the labels argument? And what is y in your first code ?<|||||>>What I don't get is that if I provide the question in decoder_input_ids, first the decoder will have the ground truth and then why should I also use the labels argument? The `EncoderDecoder` model expects the input in this way. Basically it shifts the `lm_labels` or `labels` to the right. @patrickvonplaten is this correct ? `y` is decoder input shifted to the right <|||||>Thank you @patil-suraj ! I will implement this and keep this post updated. <|||||>I tried the same, using EncoderDecoder model for QG that I initialize from bert-base-uncased. The model outputs somewhat readable questions: ``` what team won the 2015 nfl championship? what team did the nfl win in the 2015 super bowl? where was the super bowl held? what team won the 2015 nfl championship? what was the name of the team that was the first to be featured on the nfl network? what was the name of the game that the nfl used to celebrate the 2015 super bowl? when was the super bowl played? ``` However, the BLEU1 score is pretty low around 0.35. I wonder if someone got better results with EncoderDecoder architecture. Otherwise BART will probably be better for the task.<|||||>Hi @volker42maru, What parameters do you use for generation (Repetition Penalty and length penalty)? And for how long did you train your model ? BART seems to be appropriate but I personally have some difficulties making it work.<|||||>For generation I am using: ``` max_length=30, temperature=0.95, num_beams=1, length_penalty=0.25, no_repeat_ngram_size=3 ``` You will get slightly better results with a bigger beam size, but the generation method seems incredibly slow (I wonder why that is?). I trained for 2 epochs on the squad1 train set. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
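Editor's note: a condensed sketch that pulls the advice in this thread together for the v2.11-era API (`lm_labels`, no `mask_tokens`). The example strings, sequence lengths, and the way the answer and context are concatenated are placeholders.
```python
import torch
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

context = "the answer span, followed by the context paragraph ..."
question = "what is being asked ?"

enc = tokenizer.encode_plus(context, max_length=64, pad_to_max_length=True, return_tensors="pt")
dec = tokenizer.encode_plus(question, max_length=16, pad_to_max_length=True, return_tensors="pt")

labels = dec["input_ids"].clone()
labels[dec["input_ids"] == tokenizer.pad_token_id] = -100   # padding tokens are ignored by the loss

outputs = model(
    input_ids=enc["input_ids"],
    attention_mask=enc["attention_mask"],
    decoder_input_ids=dec["input_ids"],
    lm_labels=labels,
)
loss = outputs[0]
loss.backward()
```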
transformers
5,212
closed
BartConfig wrong decoder_start_token_id?
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bart Language I am using the model on (English, Chinese ...): English ## To reproduce Steps to reproduce the behavior: ``` from transformers import BartConfig, BartTokenizer config = BartConfig.from_pretrained('facebook/bart-large') tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') config.decoder_start_token_id >>> 2 tokenizer.bos_token_id >>> 0 # != config.decoder_start_token_id tokenizer.eos_token_id >>> 2 ``` It is misleading in the documentation of the function ```generate```` *decoder_start_token_id=None – (optional) int If an encoder-decoder model starts decoding with a different token than BOS. Defaults to None and is changed to BOS later.* ## Expected behavior I expect that decoder_start_token_id = tokenizer.bos_token_id, but maybe the model is designed to start decoding with EOS token.
06-23-2020 09:49:50
06-23-2020 09:49:50
Thanks for this issue we should update the documentation here!<|||||>@patrickvonplaten Thanks for the answer! Therefore what is expected from the model? EOS I guess?<|||||>Bart normally has the decoder_input_token_id defined in its config so there shoud be no problem<|||||>Hi, I also wondered about this. `facebook/bart-base` and `facebook/bart-large-mnli` do not have `decoder_start_token_id` defined in their config file so it defaults to 0 (`bos_token_id`), while all the other BART models have it as 2 (`eos_token_id`). Is there any reason for it? In fairseq's implementation looks like it is always `bos`: https://github.com/pytorch/fairseq/blob/5d7ed6ab4f92d20ad10f8f792b8703e260a938ac/fairseq/models/bart/hub_interface.py#L123<|||||>`prefix_tokens` in fairseq is not the same as `config.decoder_start_token_id` It is more like `config.force_bos_token_to_be_generated`, if I remember correctly. For the finetuned summarization versions, I have checked very aggressively and they work much better when `decoder_start_token_id=2`. @FomalhautB Do you have any empirical evidence that bart-base/bart-large are different?<|||||>> `prefix_tokens` in fairseq is not the same as `config.decoder_start_token_id` It is more like `config.force_bos_token_to_be_generated`, if I remember correctly. > > For the finetuned summarization versions, > I have checked very aggressively and they work much better when `decoder_start_token_id=2`. > > @FomalhautB Do you have any empirical evidence that bart-base/bart-large are different? I was training an autoregressive model based on Bart. It works fine until the day the config changed. After training my model for a few iterations, it only generates `<s></s>` and never changed for the iterations after that. I fixed this by forcing `decoder_start_token_id` to be 0. I didn't write anything about the `decoder_start_token_id` before and I didn't change the way that Bart generates text. I am not sure if this is also the case for the original Bart model.<|||||>That's super interesting, thanks for reporting this! Would you mind seeing if leaving `decoder_start_token_id=2`, but adding `force_bos_token_to_be_generated=True` changes anything? I'd also be interested in seeing what a batch of your data looks like during training/what finetuning code you are using if you are willing to share. <|||||>Any updates on this issue? I'm also confused<|||||>Hey @sshleifer, could you take a second look at this issue?<|||||>@patrickvonplaten @sshleifer Hello, any updates? if `labels`'s prefix of `bos` is added automatically by `BartTokenizer`, using `eos` as the first token to start generate seems unreasonable, right? But it seems that it is deliberately designed rather than a bug, why is that? ![image](https://user-images.githubusercontent.com/38466901/126856297-2a305148-ffde-4f79-ba19-2a1159e2f499.png)
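Editor's note: a quick way to see which token a given BART checkpoint starts decoding with, and how to override it at generation time for an experiment. This is illustrative only; as the maintainers note above, the finetuned summarization checkpoints work best with their configured value (EOS = 2), so the override is not a recommendation.
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

print(model.config.decoder_start_token_id)           # token id the decoder is seeded with
print(tokenizer.bos_token_id, tokenizer.eos_token_id)

input_ids = tokenizer.encode("My friends are cool but they eat too many carbs.", return_tensors="pt")
summary_default = model.generate(input_ids, max_length=20)
summary_bos_start = model.generate(input_ids, max_length=20,
                                   decoder_start_token_id=tokenizer.bos_token_id)  # experiment only
```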
transformers
5,211
closed
Remove wandb warning as it is unnecessary
Remove wandb warning as it is unnecessary. If wandb is installed, this throws a warning which just makes noise. Some images might have wandb installed but the user doesn't want to use it. If the user knows what wandb is they will have the API key set anyways.
06-23-2020 09:11:50
06-23-2020 09:11:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=h1) Report > Merging [#5211](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1ae132a07d7f294cf58cd50f7db8723d00e282de&el=desc) will **increase** coverage by `0.49%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5211/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5211 +/- ## ========================================== + Coverage 77.49% 77.98% +0.49% ========================================== Files 138 138 Lines 23787 23786 -1 ========================================== + Hits 18433 18550 +117 + Misses 5354 5236 -118 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `81.81% <ø> (+3.55%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=footer). Last update [1ae132a...ea3c239](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>no strong opinion on that<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Any chance of merging this? 
:)<|||||>The issue is that people who installed it for automatic logging won't understand why it's not working<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hey @abhishekkrthakur, how to remove this wandb warning ?<|||||>@parthplc I guess by having the API key? 🤔 sorry, i dont use wandb. maybe someone else can help.
transformers
5,210
closed
Increase the default max_length parameter when using TransfoXL & XLnet.
This is needed because the prepended PADDING_TEXT constant is already bigger than the default max_length parameter on the generate method, thus leading to no token being generated. Signed-off-by: Morgan Funtowicz <[email protected]>
06-23-2020 09:11:42
06-23-2020 09:11:42
I don't think it is necessarly needed actually because the `max_length` is overwritten by the model's specific configs in this case. See XLNet config for example: https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json. I would prefer to not set the `max_length` here, but let the user insert it if needed. After discussion with @thomwolf we decided to not hardcode parameter values in the pipelines.py file, but only set them via the `task_specific_params` in the configs. I think the problem is that the task specific params are not overwritten correctly here because the task-name is not correct. It is important the the task-name `text-generation` is given to the pipeline, so that this line works as expected: https://github.com/huggingface/transformers/blob/1ae132a07d7f294cf58cd50f7db8723d00e282de/src/transformers/pipelines.py#L400 <|||||>I tried to explain that here as well:https://github.com/huggingface/transformers/pull/5086#issuecomment-645847015 and think we should actually not pass the task name to the `__init__` function of the pipelines, but change the variable `task` to a static class variable @julien-c @LysandreJik @mfuntowicz @thomwolf <|||||>@patrickvonplaten Ok, thanks for the hints, I'll check the point you mentioned 👌 <|||||>Just a note that in the inference-api we *do* pass the correct "text-generation" task name, so there might be something else going on here.<|||||>Ok I'll check as well<|||||>I think I know what's going on - will do a PR that should fix it<|||||>This line in the inference api should not be called `causal-lm`, but `text-generation` IMO: https://github.com/huggingface/api-inference/blob/9cab899965d164f85c4961f0deafbc5034523e45/shared.py#L47 This way the correct parameters would be loaded from the config. But I would prefer to actually make the "task" name a static variable as is shown here: https://github.com/huggingface/transformers/pull/5216 @julien-c @mfuntowicz <|||||>Oh yes my bad @patrickvonplaten, this was actually on a branch: https://github.com/huggingface/api-inference/pull/3/files
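Editor's note: a sketch of the mechanism being discussed, namely that the pipeline only applies a model's `task_specific_params` when it is created with the matching task name. The exact parameter values live in the hosted `xlnet-base-cased` config linked above and may differ from what is shown in the comments here.
```python
from transformers import AutoConfig, pipeline

config = AutoConfig.from_pretrained("xlnet-base-cased")
print(config.task_specific_params)         # e.g. a "text-generation" entry with max_length etc.

# the task name must match the key in task_specific_params for the override to apply
generator = pipeline("text-generation", model="xlnet-base-cased")
print(generator.model.config.max_length)   # updated from task_specific_params, not the library default
```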
transformers
5,209
closed
[Reformer] Axial Pos Emb Improve mem usage reformer
This PR improves memory usage of Axial Position Encodings by cutting position encodings only to the required length before applying contiguous pytorch operations.
06-23-2020 08:42:41
06-23-2020 08:42:41
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=h1) Report > Merging [#5209](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5209/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5209 +/- ## ======================================= Coverage 77.28% 77.28% ======================================= Files 133 133 Lines 22134 22135 +1 ======================================= + Hits 17107 17108 +1 Misses 5027 5027 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.21% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=footer). Last update [355954f...cd443f7](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,208
closed
Train RobertaModel from scratch for my dataset
I am trying to train RobertaModel from scratch. I am following [this](https://huggingface.co/blog/how-to-train) blog but instead of `model = RobertaForMaskedLM(config=config)`, I am starting with `configuration = RobertaConfig() model = RobertaModel(configuration)` and then continuing with other steps. But I am getting error `TypeError: forward() got an unexpected keyword argument 'labels'`. The whole code piece: ``` configuration = RobertaConfig() model = RobertaModel(configuration) from transformers import LineByLineTextDataset dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="./train.txt", block_size=128, ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./Model1", overwrite_output_dir=True, num_train_epochs=1, per_gpu_train_batch_size=64, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) trainer.train() ``` Is there some other way to do pre-training? Am i missing something here?
06-23-2020 07:34:08
06-23-2020 07:34:08
Are you using transformers from source or from pip install ? `labels` is introduced in a recent commit and available on master, if you are on version 2.11.0 from pip install then use `lm_labels`<|||||>> Are you using transformers from source or from pip install ? > `labels` is introduced in a recent commit and available on master, if you are on version 2.11.0 from pip install then use `lm_labels` @patil-suraj yes I am on version 2.11.0 so where should I use `lm_labels`<|||||>If you installed using pip then yes, use `lm_labels` You'll need to change `DataCollatorForLanguageModeling`, also as you are using `Roberta` which means you training it for maksed language modelling, so you'll need set `mlm `to `True` <|||||>but I am training a RobertaModel not a RobertaMaskedLM model.<|||||>What is your pre-training objective ? Roberta is pre-trained using masked language modelling objective <|||||>i am training it from scratch for my own dataset. I want to use the vectors obtained from last layer for classification task<|||||>You can train `RobertaForMaskedLM` using the `mlm` objective and then load it in `RoberatForSequenceClassification` for classification . `RoberatForSequenceClassification` will take care of taking last layer vector and feeding it to a classification layer.<|||||>yes I have done that now I want to compare it with others RF or GDBTs and extract some features and that's why I want to train RobertModel.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
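Editor's note: a condensed sketch of the path suggested above, namely pre-training `RobertaForMaskedLM` with `mlm=True` and then loading the same weights into `RobertaForSequenceClassification`. Paths, the tokenizer, and the hyperparameters are placeholders.
```python
from transformers import (RobertaConfig, RobertaForMaskedLM, RobertaForSequenceClassification,
                          RobertaTokenizerFast, DataCollatorForLanguageModeling,
                          LineByLineTextDataset, Trainer, TrainingArguments)

tokenizer = RobertaTokenizerFast.from_pretrained("./my-tokenizer")   # your trained tokenizer
model = RobertaForMaskedLM(config=RobertaConfig(vocab_size=tokenizer.vocab_size))

dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="./train.txt", block_size=128)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./Model1", num_train_epochs=1, per_device_train_batch_size=64),
    data_collator=data_collator,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("./Model1")

# later, reuse the pre-trained encoder for classification:
clf = RobertaForSequenceClassification.from_pretrained("./Model1", num_labels=2)
```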
transformers
5,207
closed
How to build Bimodel to search code snippets? [CodeBERTa]
Hi, I would like to build a code search engine model. The main purpose is that when I pass a docstring, it should give me the top-k associated code snippets as results. I have data in the form of (docstring, code) pairs, which means each docstring is associated with the mentioned code snippet. I have seen the CodeBERTa fine-tuning [code](https://huggingface.co/huggingface/CodeBERTa-language-id), but it does not use docstrings. Is it possible to use this model? Can you please give me some entry points to solve this problem using the Hugging Face library?
06-23-2020 07:04:28
06-23-2020 07:04:28
CodeBERTa was indeed trained on just the code, so you would need to tweak the approach. Did you read the paper for CodeSearchNet (https://arxiv.org/abs/1909.09436) by @hamelsmu?<|||||>Thanks Julien for your response. I have taken an overview of the paper and its [code](https://github.com/github/CodeSearchNet), and I will try it. But is it possible to solve it using BERT with the Hugging Face library? What kind of tweaks do I need to apply to the CodeBERTa fine-tuning [code](https://huggingface.co/huggingface/CodeBERTa-language-id)? Can it be solved by fine-tuning BertForQuestionAnswering ([code](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering))?<|||||>Maybe CodeBERT ([https://arxiv.org/abs/2002.08155](https://arxiv.org/abs/2002.08155)) is suitable for you.<|||||>This paper is of high interest to me. Is the fine-tuning source code for that paper publicly available, or are there any short snippets available that can help with fine-tuning?<|||||>You can visit this link ([https://github.com/microsoft/CodeBERT](https://github.com/microsoft/CodeBERT)).<|||||>Thanks for sharing. I will check it and let you know about related concerns on the shared GitHub repository.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
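Editor's note: one possible starting point for the question asked in this thread, and explicitly not the CodeSearchNet or CodeBERT training recipe: embed the docstring query and the candidate code snippets with a pretrained encoder and rank by cosine similarity. Without fine-tuning on (docstring, code) pairs the ranking quality will be limited, and the model name is only an example.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
model = AutoModel.from_pretrained("huggingface/CodeBERTa-small-v1")
model.eval()

def embed(texts):
    enc = tokenizer.batch_encode_plus(texts, max_length=128, pad_to_max_length=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc)[0]                        # (batch, seq_len, hidden_dim)
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)         # mean pooling over non-padding tokens

codes = ["def add(a, b): return a + b",
         "def read_file(path): return open(path).read()"]
query_vec = embed(["Add two numbers"])
code_vecs = embed(codes)
scores = torch.nn.functional.cosine_similarity(query_vec, code_vecs)
top_k = scores.topk(k=1).indices.tolist()               # indices of the best-matching snippets
```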
transformers
5,206
closed
[fix] remove unused import
06-23-2020 03:38:33
06-23-2020 03:38:33
transformers
5,205
closed
[fix] mobilebert had wrong path, causing slow test failure
Also deleted redundant slow test. `test_inference_no_head` covers this completely.
06-23-2020 03:30:32
06-23-2020 03:30:32
transformers
5,204
closed
T5 Model : What is maximum sequence length that can be used with pretrained T5 (3b model) checkpoint?
As the paper describes, T5 uses a relative attention mechanism, and the answer to this [issue](https://github.com/google-research/text-to-text-transfer-transformer/issues/273) says that T5 can use any sequence length, where the only constraint is memory. According to this, can I use T5 to summarize inputs that have more than 512 tokens in a sequence?
06-23-2020 02:36:22
06-23-2020 02:36:22
Yes, you can, but you should be aware that memory requirements quadruple when doubling the input sequence length for "normal" self-attention (as in T5), so you will quickly run out of memory. Here is a snippet that shows that you can run input ids longer than `config.max_position_embeddings`:

```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.config.max_position_embeddings  # 512

input_ids = torch.tensor([600 * [0]])  # shape (1, 600), longer than 512
model(input_ids, decoder_input_ids=input_ids)  # => no error
```

For more memory-efficient models, you should take a look at `Reformer` and `Longformer`.<|||||>I hope we will soon have these models ready for summarization.<|||||>Thanks for the quick help. So basically, the T5 model in Hugging Face can handle arbitrary sequence lengths, right? And the second line (**model.config.max_position_embeddings**) basically shows the default maximum input sequence length, right? What do you think of the following code (here I simply raise the tokenizer's `max_length`)?

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained('t5-small')
tokenizer = T5Tokenizer.from_pretrained('t5-small')

t5_prepared_text = "summarize: " + some_preprocess_text  # some_preprocess_text holds the (long) source document
tokenized_text = tokenizer.encode(t5_prepared_text, max_length=1024, return_tensors="pt")

summary_ids = model.generate(tokenized_text,
                             num_beams=4,
                             no_repeat_ngram_size=2,
                             min_length=30,
                             max_length=100,
                             early_stopping=True)
```
<|||||>Hi, I compared two summary outputs of T5, one using a 1024-token and one using a 512-token input sequence length, and I do not see any difference in the generated summaries. Any idea what causes this behavior?<|||||>Hi, I have the same question. Did you happen to figure out why?<|||||>Hi, I haven't worked much with the Hugging Face models lately. Since we can pass an input of any length, the main parameter to tune should be the minimum generation length; try changing it.<|||||>I am still very new to Hugging Face. I have a pretty long text, about 1500 words. The issue I was having is that when I set max_length=512 or 1024, the model returns essentially the same summary. Do you know why?<|||||>I think it is because the minimum length is unchanged. Regardless of the input, the algorithm keeps generating text until it produces the EOS (end-of-sentence) token, so it is common to get a summary of the same length even if you add a few more sentences to the original input.<|||||>Hi, do we have to fine-tune the model when changing `model.config.max_position_embeddings`?<|||||>Not really, because T5 uses relative positional embeddings.<|||||>> I think it is because the minimum length is unchanged. [...]

Personally, I think there is another reason. If you use the off-the-shelf T5-base model to summarize directly (i.e., no fine-tuning), a longer input can result in the same output as the original 512-token input, because the T5-base model was pre-trained with `max_source_length == 512`, so tokens beyond 512 may not be attended to by the T5Attention layers. But after fine-tuning the T5-base model with a longer `max_source_length`, an input with a longer `max_source_length` may well give you a different output than the 512-token one.
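To make the suggestions in this thread concrete, here is a minimal sketch (not taken from the thread) that feeds an input longer than 512 tokens to T5 and controls the summary length through `min_length`/`max_length` at generation time; `long_document` is a placeholder for your own text, and the specific generation settings are only illustrative:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5Tokenizer.from_pretrained("t5-base")

long_document = "..."  # placeholder: any article longer than 512 tokens

# Encode more than the usual 512 tokens; memory is the practical limit.
input_ids = tokenizer.encode("summarize: " + long_document,
                             max_length=1024,
                             truncation=True,
                             return_tensors="pt")

# The summary length is governed mainly by min_length / max_length,
# not by how long the input is.
summary_ids = model.generate(input_ids,
                             num_beams=4,
                             min_length=60,   # raise this to force longer summaries
                             max_length=150,
                             no_repeat_ngram_size=2,
                             early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```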
transformers
5,203
closed
Can you release the code for Write With Transformer?
I'd like to use my own model, so is this possible?
06-23-2020 02:03:59
06-23-2020 02:03:59
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,202
closed
examples/seq2seq supports translation
- renames `examples/summarization` -> `examples/seq2seq`
- finetune.py and run_eval.py support mbart, marian and t5.
- task_specific_params are used
- if you specify task='translation', then your metric becomes BLEU instead of ROUGE (a rough sketch of this follows below).
- improved `README.md`
- lots of test coverage
- scripts to reproduce distilbart results

TODO:
- [x] verified distilbart commands replicate posted results.
- [x] new xsum shared task URL.
- [x] mini models for marian
- [x] mbart finetuning unittests.

Postponed and made issues for:
- [ ] check bleu scores for translation models with run_eval.py
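As a rough illustration of the translation path and why BLEU (rather than ROUGE) is the natural metric for it, the sketch below translates with a Marian checkpoint and scores the output with sacrebleu. This is not the code in finetune.py or run_eval.py; the checkpoint name, example sentences, and references are only placeholders, and sacrebleu is assumed to be installed:

```python
# Illustrative only: translate with a Marian model and score with corpus BLEU.
import sacrebleu
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # placeholder checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sources = ["I love reading books.", "The weather is nice today."]
references = ["Ich liebe es, Bücher zu lesen.", "Das Wetter ist heute schön."]

# Tokenize, generate translations, and decode back to text.
batch = tokenizer(sources, return_tensors="pt", padding=True)
generated = model.generate(**batch)
predictions = tokenizer.batch_decode(generated, skip_special_tokens=True)

# sacrebleu expects a list of prediction strings and a list of reference lists.
bleu = sacrebleu.corpus_bleu(predictions, [references])
print(round(bleu.score, 2))
```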
06-23-2020 00:59:58
06-23-2020 00:59:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=h1) Report
> Merging [#5202](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76e5af4cfd821c0c610b9927a2d2cd58a02f43e4&el=desc) will **increase** coverage by `2.46%`.
> The diff coverage is `50.00%`.

[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5202/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=tree)

```diff
@@            Coverage Diff             @@
##           master    #5202      +/-   ##
==========================================
+ Coverage   75.49%   77.96%   +2.46%
==========================================
  Files         138      138
  Lines       23839    23846       +7
==========================================
+ Hits        17998    18592     +594
+ Misses       5841     5254     -587
```

| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `87.50% <50.00%> (-7.63%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (+0.36%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.42%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+1.44%)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.70% <0.00%> (+1.72%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <0.00%> (+2.57%)` | :arrow_up: |
| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree-more) | |

------

[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=footer). Last update [76e5af4...c546c3e](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Gunna merge this to avoid tweeting broken links. Comments welcome, I will probably need to do more cleanup.
transformers
5,201
closed
Linformer
# 🌟 New model addition

## Model description

https://arxiv.org/pdf/2006.04768.pdf

This model is very simple: it projects the key tensor into a lower-dimensional space (e.g., k=128) along the sequence axis, then computes attention (seq_len x k), applies softmax, and multiplies with the value tensor (note that the value tensor must also be projected to dimension k along the length dimension). Could this be added?

## Open source status

PyTorch sketch implementation: https://github.com/tatp22/linformer-pytorch
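For reference, here is a minimal, self-contained sketch of the mechanism described above (single head, no masking or dropout). Names such as `LinformerSelfAttention`, `proj_k`, and `proj_v` are illustrative and not part of any existing transformers API:

```python
# Minimal sketch of Linformer-style self-attention: the length axis of the
# key and value tensors is projected from seq_len down to k, so the attention
# matrix is (seq_len x k) instead of (seq_len x seq_len).
import math
import torch
import torch.nn as nn

class LinformerSelfAttention(nn.Module):
    def __init__(self, d_model: int, seq_len: int, k: int = 128):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        # E and F in the paper: learned projections over the length dimension.
        self.proj_k = nn.Linear(seq_len, k, bias=False)
        self.proj_v = nn.Linear(seq_len, k, bias=False)
        self.scale = 1.0 / math.sqrt(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q = self.query(x)                                 # (b, n, d)
        kt = self.key(x).transpose(1, 2)                  # (b, d, n)
        vt = self.value(x).transpose(1, 2)                # (b, d, n)
        kt = self.proj_k(kt)                              # (b, d, k): length axis projected
        v = self.proj_v(vt).transpose(1, 2)               # (b, k, d)
        attn = torch.softmax(q @ kt * self.scale, dim=-1) # (b, n, k): linear in seq_len
        return attn @ v                                   # (b, n, d)

x = torch.randn(2, 512, 64)
out = LinformerSelfAttention(d_model=64, seq_len=512, k=128)(x)
print(out.shape)  # torch.Size([2, 512, 64])
```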
06-22-2020 23:37:30
06-22-2020 23:37:30
Duplicate #4967