Dataset columns: repo (string, 1 class) · number (int64, 1 – 25.3k) · state (string, 2 classes) · title (string, length 1 – 487) · body (string, length 0 – 234k) · created_at (string, length 19) · closed_at (string, length 19) · comments (string, length 0 – 293k)
transformers
6,702
closed
Questions on the date of Wikipedia dumps for pretrained checkpoints (BERT and RoBERTa).
# ❓ Questions & Help Hi, I really appreciate Hugging Face and the fantastic code for pretrained LMs. I am trying to figure out the date of the Wikipedia dumps used for the pretrained checkpoints (BERT and RoBERTa). <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I'd like to know the date (month, year) of the Wikipedia dumps that were used for the current pretrained checkpoints of BERT-base uncased and RoBERTa-base and large. I am looking for an older version of the pretrained checkpoints that were trained on a Wikipedia dump before 2019. If available, is there a way to get the older version of pretrained checkpoints (before 2019)? Thanks! <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
08-25-2020 01:08:49
08-25-2020 01:08:49
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,701
closed
NER GermEval preprocessor not working as documented
## Environment info - `transformers` version: 2.5.1 - Platform: Darwin-18.7.0-x86_64-i386-64bit - Python version: 3.7.7 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @stefan-it ## Information Following https://github.com/huggingface/transformers/blob/master/examples/token-classification/README.md The problem arises when using: python3 scripts/preprocess.py train.txt.tmp $BERT_MODEL $MAX_LENGTH > train.txt python3 scripts/preprocess.py dev.txt.tmp $BERT_MODEL $MAX_LENGTH > dev.txt python3 scripts/preprocess.py test.txt.tmp $BERT_MODEL $MAX_LENGTH > test.txt The commands above produce the following output: /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) /usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
_np_qint16 = np.dtype([("qint16", np.int16, 1)]) /usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) Traceback (most recent call last): File "scripts/preprocess.py", line 13, in <module> max_len -= tokenizer.num_special_tokens_to_add() AttributeError: 'BertTokenizer' object has no attribute 'num_special_tokens_to_add' Thanks!
08-24-2020 22:53:31
08-24-2020 22:53:31
Hi @jdruvini , the script was added in [Transformers 3.0.0](https://github.com/huggingface/transformers/releases/tag/v3.0.0) so unfortunately it only works with more recent versions of Transformers. Could you update your version to at least 3.0 and give it a try 🤔<|||||>Indeed... My bad, sorry.<|||||>The GermEval 2014 files have been moved and the curl commands in the README do not work. The new location is: https://drive.google.com/drive/folders/1kC0I2UGl2ltrluI9NqDjaQJGw5iliw_J JD
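For context, a minimal sketch of the call the preprocessing script relies on, assuming transformers >= 3.0.0 (the model name and max length are illustrative values taken from the README):

```python
from transformers import AutoTokenizer

# The GermEval example uses a multilingual BERT checkpoint; any BERT-style model works here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

max_len = 128
# preprocess.py reserves room for the special tokens ([CLS]/[SEP]); this method
# does not exist in transformers 2.5.1, which is what triggers the AttributeError above.
max_len -= tokenizer.num_special_tokens_to_add()
print(max_len)  # 126 for a single sequence
```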
transformers
6,700
closed
Some weights of AlbertModel were not initialized ['albert.embeddings.position_ids']
Hello! There seems to be a problem with the current code to load a pre-trained Albert model. This warning appears in any configuration of the Albert model: `Some weights of AlbertModel were not initialized from the model checkpoint at albert-base-v2 and are newly initialized: ['albert.embeddings.position_ids']` `You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.` I found this happens only when I install the library from source. Models load correctly (without the warning) when installing the library with pip.
08-24-2020 22:32:15
08-24-2020 22:32:15
Hello! Which checkpoint are you trying to load? Is it one of your checkpoints or is it one of the checkpoints hosted on the modelhub?<|||||>This is probably because of this PR: https://github.com/huggingface/transformers/pull/5773 , but should not pose a real problem. I guess we just have to add the position ids as "allowed" non-initialized weights.<|||||>@LysandreJik I used model hub checkpoints. I ran the following lines with PyTorch: `from transformers import AlbertForPreTraining` `model = AlbertForPreTraining.from_pretrained('albert-base-v2')` <|||||>Hey @vgaraujov, I can reproduce - this PR: #6700 should fix it. Thanks for reporting it :-) <|||||>@patrickvonplaten Thank you for your help! 💯 <|||||>Position_ids seems unnecessary to be saved? Why not use register_buffer with persistent=False<|||||>> Position_ids seems unnecessary to be saved? Why not use register_buffer with persistent=False It's a fantastic suggestion, @ruotianluo! But, alas, it can't be used at the moment, since: 1. this functionality [was added just a few months ago](https://github.com/pytorch/pytorch/pull/37191) (can't require recent `torch`) 2. [it doesn't yet work with torchscript](https://github.com/pytorch/pytorch/issues/45012) See the following solution: https://github.com/huggingface/transformers/pull/7224
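As an aside, here is an illustrative sketch of the `register_buffer(..., persistent=False)` suggestion from the thread; it requires PyTorch >= 1.6, and the module and shape used are made up:

```python
import torch
from torch import nn

class WithPositionIds(nn.Module):
    def __init__(self, max_position_embeddings=512):
        super().__init__()
        # persistent=False keeps the buffer out of the state_dict, so checkpoints
        # saved without "position_ids" would not trigger the warning above.
        self.register_buffer(
            "position_ids",
            torch.arange(max_position_embeddings).expand((1, -1)),
            persistent=False,
        )

module = WithPositionIds()
print("position_ids" in module.state_dict())  # False
```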
transformers
6,699
closed
More tests to Trainer
While doing so, I realized there were some problems with the seed (in particular for HP search), so I added a few tests of that too.
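A rough sketch of the kind of seeding behaviour such tests can check (the helper follows the library's public `set_seed`, but the check itself is made up):

```python
import torch
from transformers import set_seed

def init_weights(seed):
    set_seed(seed)  # seeds python, numpy and torch in one call
    return torch.nn.Linear(4, 4).weight.detach().clone()

# With the same seed, two initializations should be bit-identical.
assert torch.equal(init_weights(42), init_weights(42))
```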
08-24-2020 20:17:41
08-24-2020 20:17:41
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=h1) Report > Merging [#6699](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6b4c617666fd26646d44d54f0c45dfe1332b12ca?el=desc) will **decrease** coverage by `0.42%`. > The diff coverage is `70.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6699/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6699 +/- ## ========================================== - Coverage 79.44% 79.01% -0.43% ========================================== Files 156 156 Lines 28386 28388 +2 ========================================== - Hits 22551 22432 -119 - Misses 5835 5956 +121 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `53.64% <70.00%> (+2.91%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.76%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-1.96%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.50%)` | :arrow_up: | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=footer). 
Last update [6b4c617...f7790fa](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,698
closed
[fixdoc] Add import to pegasus usage doc
08-24-2020 19:46:30
08-24-2020 19:46:30
transformers
6,697
closed
Words of overflowing_tokens in function truncate_sequences are not in the right order
https://github.com/huggingface/transformers/blob/6b4c617666fd26646d44d54f0c45dfe1332b12ca/src/transformers/tokenization_utils_base.py#L2570 ``` if not overflowing_tokens: window_len = min(len(pair_ids), stride + 1) else: window_len = 1 overflowing_tokens.extend(pair_ids[-window_len:]) pair_ids = pair_ids[:-1] ``` In my understanding, overflowing_tokens should be a subsequence of the second sequence (pair_ids). But in this code, the order of words in overflowing_tokens is not the same as in pair_ids (and it also differs from the reversed pair_ids when window_len != 1).
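A toy reproduction of the quoted loop (made-up ids, stride and number of removed tokens) that shows the mixed ordering:

```python
pair_ids = [10, 11, 12, 13, 14, 15]
stride = 2
num_tokens_to_remove = 3
overflowing_tokens = []

for _ in range(num_tokens_to_remove):
    if not overflowing_tokens:
        window_len = min(len(pair_ids), stride + 1)
    else:
        window_len = 1
    overflowing_tokens.extend(pair_ids[-window_len:])
    pair_ids = pair_ids[:-1]

# The removed suffix was [13, 14, 15], but the collected tokens are neither in
# the original nor in the reversed order:
print(overflowing_tokens)  # [13, 14, 15, 14, 13]
```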
08-24-2020 19:31:12
08-24-2020 19:31:12
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,696
closed
Use separate tqdm progressbars
Currently, we close the progress bar and break the inner loops when training is complete, which results in the progress bars indicating one less step than was actually done. The user might believe the last step was not done (even if it was), so this PR uses separate generators for the loops rather than iterating over the progress bar itself, and manually updates the progress bar (I tried to just do the last update manually, but tqdm doesn't want to cooperate with that approach).
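A minimal sketch of the resulting pattern (illustrative only, not the actual Trainer code):

```python
from tqdm.auto import tqdm

def train_epoch(dataloader, max_steps=None):
    # Iterate over the dataloader itself and drive the progress bar by hand,
    # so the bar reflects every step that was actually performed.
    pbar = tqdm(total=len(dataloader), desc="Iteration")
    for step, batch in enumerate(dataloader):
        ...  # training step
        pbar.update(1)
        if max_steps is not None and step + 1 >= max_steps:
            break
    pbar.close()
```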
08-24-2020 18:35:47
08-24-2020 18:35:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=h1) Report > Merging [#6696](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6b4c617666fd26646d44d54f0c45dfe1332b12ca?el=desc) will **decrease** coverage by `0.03%`. > The diff coverage is `80.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6696/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6696 +/- ## ========================================== - Coverage 79.44% 79.41% -0.04% ========================================== Files 156 156 Lines 28386 28390 +4 ========================================== - Hits 22551 22545 -6 - Misses 5835 5845 +10 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6696/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `50.88% <80.00%> (+0.15%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6696/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=footer). Last update [6b4c617...96d9c81](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,695
closed
Fix hyperparameter_search doc
Fixes a few typos in the doc.
08-24-2020 17:35:17
08-24-2020 17:35:17
transformers
6,694
closed
Move unused args to kwargs
Those arguments are popped from the kwargs since they are specific to optuna for now.
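Schematically, the pattern looks like the following; the option names `timeout` and `n_jobs` are just examples of optuna-specific settings, not necessarily the ones moved in this PR:

```python
def hyperparameter_search(hp_space=None, n_trials=20, **kwargs):
    # Backend-specific options are pulled out of kwargs so they are not
    # forwarded to backends that do not understand them.
    timeout = kwargs.pop("timeout", None)
    n_jobs = kwargs.pop("n_jobs", 1)
    ...
```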
08-24-2020 17:13:47
08-24-2020 17:13:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=h1) Report > Merging [#6694](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/912a21ec78998a5e35751132c328e7aee8e9f47f?el=desc) will **decrease** coverage by `0.73%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6694/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6694 +/- ## ========================================== - Coverage 79.68% 78.95% -0.74% ========================================== Files 156 156 Lines 28386 28386 ========================================== - Hits 22619 22411 -208 - Misses 5767 5975 +208 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.17% <ø> (-37.57%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=footer). Last update [912a21e...cd84cb9](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,693
closed
Longformer finetuning on TPUs IndexError: tuple index out of range
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Google Colab - Python version: 3.6 - PyTorch version (GPU?):1.6.0+cu101 - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes. XLA ### Who can help Longformer/Reformer: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): longformer: allenai/longformer-large-4096 The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: My Model ```python class LongFormerBaseUncased(nn.Module): def __init__(self): super(LongFormerBaseUncased, self).__init__() self.bert = transformers.LongformerModel.from_pretrained( "allenai/longformer-large-4096", gradient_checkpointing=True ) self.bert_drop = nn.Dropout(config.dropout) self.out = nn.Linear(1024, config.output_num_classes) def forward(self, ids, mask): _, o2 = self.bert(ids, attention_mask = mask) bo = self.bert_drop(o2) output = self.out(bo) return output ``` ```python tokenizer = transformers.LongformerTokenizer.from_pretrained( "allenai/longformer-base-4096" ) text = "Very Long text" tokenized = self.tokenizer.tokenize(text) inputs = self.tokenizer.encode_plus( tokenized, is_pretokenized=True, max_length=4096, pad_to_max_length=True, truncation=True, ) ids = inputs["input_ids"] mask = inputs["attention_mask"] ids = ids.to(device, dtype=torch.long) mask = mask.to(device, dtype=torch.long) targets = targets.to(device, dtype=torch.float) #This throws the error outputs = model(ids=ids, mask=mask) ``` Error ``` Exception in device=TPU:0: tuple index out of range Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 228, in _start_fn fn(gindex, *args) File "<ipython-input-14-9a008098ce7f>", line 3, in _mp_fn a = run() File "<ipython-input-12-9c37f47d0144>", line 156, in run train_fn(train_data_loader, model, optimizer, device, scheduler) File "<ipython-input-12-9c37f47d0144>", line 26, in train_fn outputs = model(ids=ids, mask=mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "<ipython-input-9-b68f74a484cf>", line 12, in forward _, o2 = self.bert(ids, attention_mask = mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py", line 1004, in forward output_hidden_states=output_hidden_states, File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py", line 692, in forward create_custom_forward(layer_module), hidden_states, attention_mask, File "/usr/local/lib/python3.6/dist-packages/torch/utils/checkpoint.py", line 163, in checkpoint return CheckpointFunction.apply(function, preserve, *args) File "/usr/local/lib/python3.6/dist-packages/torch/utils/checkpoint.py", line 74, in forward outputs = run_function(*args) File 
"/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py", line 687, in custom_forward return module(*inputs, output_attentions) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py", line 658, in forward self_attn_outputs = self.attention(hidden_states, attention_mask, output_attentions=output_attentions,) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__ result = self.forward(*input, **kwargs) IndexError: tuple index out of range An exception has occurred, use %tb to see the full traceback. ```
08-24-2020 17:12:15
08-24-2020 17:12:15
Hey @wassimseif, sadly neither Longformer nor Reformer works on PyTorch/XLA. There is just too much dynamic tensor reshaping happening. I think @ibeltagy made Longformer work on PyTorch/XLA when respecting certain limitations (only local attention)<|||||>Hey @patrickvonplaten, Understood. Is there some wiki that specifies which models work on XLA & which don't?<|||||>@wassimseif, running longformer on pytorch-xla is tracked in this issue https://github.com/allenai/longformer/issues/101. I am aiming to make that code available soon, probably this week. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi. Got the same error. Any update on this issue? Thanks!
transformers
6,692
closed
Add "tie_word_embeddings" config param
As discussed in #6628, this PR makes the word embedding tying dependent on a new parameter: `config.tie_word_embeddings` which is set to `True` by default and set to `False` for Reformer. Also, some unnecessary code is removed in Albert and a similar param is deprecated in Transfo-XL. I don't see how this PR could break backwards compatibility as the `tie_word_embeddings` param did not exist before and is set to `True` by default. Thanks a lot for posting the issue @stas00 .
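For readers unfamiliar with the term, a generic PyTorch illustration of what tying word embeddings means (not the transformers internals): the output projection reuses the input embedding matrix, so both point at the same parameter. With `tie_word_embeddings=False`, the two matrices simply stay independent.

```python
import torch
from torch import nn

vocab_size, hidden_size = 100, 16
embeddings = nn.Embedding(vocab_size, hidden_size)
lm_head = nn.Linear(hidden_size, vocab_size, bias=False)

# Tie the weights: both modules now share a single Parameter.
lm_head.weight = embeddings.weight
assert lm_head.weight.data_ptr() == embeddings.weight.data_ptr()
```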
08-24-2020 16:28:56
08-24-2020 16:28:56
`tie_word_embeddings` doesn't sound a good name to me since it may be confused with freezing the word embeddings.<|||||>Maybe `embedding_as_softmax_weights` or something like that?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=h1) Report > Merging [#6692](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/625318f52516b413126be1bb1cb6818231d2eca6?el=desc) will **decrease** coverage by `0.41%`. > The diff coverage is `81.25%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6692/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6692 +/- ## ========================================== - Coverage 79.49% 79.08% -0.42% ========================================== Files 156 156 Lines 28405 28399 -6 ========================================== - Hits 22581 22458 -123 - Misses 5824 5941 +117 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `96.16% <ø> (+0.07%)` | :arrow_up: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.69% <50.00%> (ø)` | | | [src/transformers/configuration\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `89.09% <60.00%> (-3.22%)` | :arrow_down: | | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <100.00%> (+0.02%)` | :arrow_up: | | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `83.43% <100.00%> (-0.07%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <100.00%> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: | | ... 
and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=footer). Last update [625318f...c723d16](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> `tie_word_embeddings` doesn't sound a good name to me since it may be confused with freezing the word embeddings. I think the name `tie_word_embeddings` is alright because it quite clear that it is a flag to me and it forces the output word embeddings to point to the same graph node where the input word embeddings poitns to for which "tying' is a fitting word IMO. <|||||>> > `tie_word_embeddings` doesn't sound a good name to me since it may be confused with freezing the word embeddings. > > I think the name `tie_word_embeddings` is alright because it quite clear that it is a flag to me and it forces the output word embeddings to point to the same graph node where the input word embeddings poitns to for which "tying' is a fitting word IMO. Fair enough!<|||||>What about the first part of `tie_weights` - the proposed PR, leaves it unmodified: ``` def tie_weights(self): [...] output_embeddings = self.get_output_embeddings() if output_embeddings is not None: self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings()) ``` this needs to be configurable too. The sub-class override with `pass()` in `reformer` removed both calls and not just `self._tie_or_clone_weights`, so this PR changes the behavior of `reformer` which will now tie input and output embeddings. I will need the same original behavior (neither of 2 calls) for fairseq transformer port. I thought the conclusion in https://github.com/huggingface/transformers/issues/6628 was about a config option to activate or not `tie_weights`, but ended up with only one of its internal calls. But, I think this change is good - as it gives more refined control, though also need the first call to be configurable. Perhaps it could be: `config.tie_in_out_embeddings` for the first call? <|||||>> What about the first part of `tie_weights` - the proposed PR, leaves it unmodified: > > ``` > def tie_weights(self): > [...] > output_embeddings = self.get_output_embeddings() > if output_embeddings is not None: > self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings()) > ``` > > this needs to be configurable too. > > The sub-class override with `pass()` in `reformer` removed both calls and not just `self._tie_or_clone_weights`, so this PR changes the behavior of `reformer` which will now tie input and output embeddings. I will need the same original behavior (neither of 2 calls) for fairseq transformer port. > > I thought the conclusion in #6628 was about a config option to activate or not `tie_weights`, but ended up with only one of its internal calls. But, I think this change is good - as it gives more refined control, though also need the first call to be configurable. > > Perhaps it could be: `config.tie_in_out_embeddings` for the first call? Sorry I don't really follow here. 
Could you explain how this PR breaks backward compatibility for Reformer and what first part of `tie_weights` is not modified? <|||||>Here is a stripped down version of the code: Before this PR: ``` # src/transformers/modeling_utils.py def tie_weights(self): self._tie_or_clone_weights(...) # part 1 self._tie_encoder_decoder_weights(...) # part 2 # src/transformers/modeling_reformer.py def tie_weights(self): pass ``` After this PR: ``` # src/transformers/modeling_utils.py def tie_weights(self): self._tie_or_clone_weights(...) # part 1 if self.config.tie_word_embeddings: # part 2 self._tie_encoder_decoder_weights(...) # src/transformers/modeling_reformer.py ``` I removed all the other option checks to just show the gist of the change. As you can see the first part of `tie_weights`, i.e. `_tie_or_clone_weights`, will now be called by reformer whereas it was not getting called before this PR when it overridden the whole `tie_weights` method. i.e. the first part also needs a config option. Please let me know whether this clarification was useful, @patrickvonplaten. <|||||>> # src/transformers/modeling_utils.py > def tie_weights(self): > self._tie_or_clone_weights(...) # part 1 > if self.config.tie_word_embeddings: # part 2 > self._tie_encoder_decoder_weights(...) > > # src/transformers/modeling_reformer.py I think it should be rather (after this PR): ``` # src/transformers/modeling_utils.py def tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(...) # part 1 self._tie_encoder_decoder_weights(...) # part2 # src/transformers/modeling_reformer.py ```<|||||>Weird. I see what happened. github was showing only part of the code in the diff, not showing the `if self.config.is_encoder_decoder and self.config.tie_encoder_decoder` so I thought the other part will always be run. `tie_encoder_decoder` and `is_encoder_decoder` are False (by default) for reformer therefore the other part won't be run either, so it's still a noop for reformer. And then I mixed up the 2 parts when trying to explain what I thought I saw. I will checkout the whole PR code in the future and not look at the partial picture shown in github. So all is good. Thank you for bearing with me. Thank you for this fix, @patrickvonplaten
transformers
6,691
closed
Last fix for Ray HP search
08-24-2020 16:07:51
08-24-2020 16:07:51
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=h1) Report > Merging [#6691](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3a7fdd3f5214d1ec494379e7c65b4eb08146ddb0?el=desc) will **increase** coverage by `0.48%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6691/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6691 +/- ## ========================================== + Coverage 78.95% 79.44% +0.48% ========================================== Files 156 156 Lines 28384 28386 +2 ========================================== + Hits 22412 22551 +139 + Misses 5972 5835 -137 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `50.73% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.95%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.50%)` | :arrow_up: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=footer). Last update [3a7fdd3...0081e9d](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,690
closed
Add DPR to models summary
I created a `retrieval-based-models` section for models like DPR.
08-24-2020 15:05:05
08-24-2020 15:05:05
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=h1) Report > Merging [#6690](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f?el=desc) will **decrease** coverage by `0.71%`. > The diff coverage is `78.15%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6690/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6690 +/- ## ========================================== - Coverage 80.37% 79.65% -0.72% ========================================== Files 156 156 Lines 28058 28248 +190 ========================================== - Hits 22552 22502 -50 - Misses 5506 5746 +240 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | | | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `48.91% <0.00%> (-0.18%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (+0.16%)` | :arrow_up: | | [src/transformers/modeling\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `95.55% <ø> (ø)` | | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <ø> (ø)` | | | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (ø)` | | | ... and [40 more](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=footer). 
Last update [16e3894...f62c9fa](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,689
closed
Add tokenizer to Trainer
Not entirely sure about this change as there is a trade-off between API complexity and ease of use. This PR adds `tokenizer` as an optional argument to `Trainer` (if this is approved, I will do the same for `TFTrainer`; I have a few recent changes to port there but was mainly waiting for @jplu to be back from vacation to bring the two APIs on par). The benefit is that: - we can have a smart default `data_collator` that will automatically pad examples if the tokenizer is provided, so the user doesn't have to learn about data collators for simple examples. - we can save the tokenizer along with the model directly inside `Trainer` for the intermediary checkpoints, so a checkpoint folder can be used directly with our scripts when resuming an interrupted training. As for the bad part, it's just that it adds a new argument to `Trainer`.
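A hedged usage sketch of the proposed argument; `train_dataset` is assumed to be a tokenized dataset prepared elsewhere, and the checkpoint name is just an example:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    train_dataset=train_dataset,  # assumed to be prepared elsewhere
    tokenizer=tokenizer,          # enables the padding data collator and tokenizer checkpointing
)
```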
08-24-2020 14:48:40
08-24-2020 14:48:40
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=h1) Report > Merging [#6689](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/abc0202194674ae5e241e547f3af34b4226bdc72?el=desc) will **decrease** coverage by `1.73%`. > The diff coverage is `55.55%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6689/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6689 +/- ## ========================================== - Coverage 78.98% 77.24% -1.74% ========================================== Files 156 156 Lines 28398 28405 +7 ========================================== - Hits 22429 21941 -488 - Misses 5969 6464 +495 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `53.66% <55.55%> (-0.13%)` | :arrow_down: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `58.88% <0.00%> (-36.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=footer). Last update [abc0202...54feec2](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Nice! I like it. Ok for me to do the same on the TF one :+1:
transformers
6,688
closed
Question Answering demonstrator for contributed model stopped working
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> All if this is run on the huggingface platform for contributed models. Processing of the model in other hosts works correctly, using the versions described below. Was there an upgrade to the deployed transformers demonstration code that breaks the loading of contributed q/a models? - `transformers` version: 2.2.1 - Platform: ubuntu 18.04 - Python version: 3.7.7 - PyTorch version (GPU?): 1.3.1 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @mfuntowicz ## Information Model I am using (Bert, XLNet ...): Contributed model `mfeb/albert-xxlarge-v2-squad2` based on Albert xxlarge v2, pretrained with SQuAD2. The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) * [x] web demonstrator for question answering, using the contributed model The tasks I am working on is: * [x] an official GLUE/SQuAD task: SQuAD 2 * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Visit https://huggingface.co/mfeb/albert-xxlarge-v2-squad2 2. press `compute` button 3. See the following message: ``` Model name 'mfeb/albert-xxlarge-v2-squad2' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'mfeb/albert-xxlarge-v2-squad2' was a path or url to a directory containing vocabulary files named ['spiece.model'], but couldn't find such vocabulary files at this path or url. ``` This seems to imply that the code that is performing the run_squad is not recognizing that the model is one of the contributed models (not one of the recognized, provided models). <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> An answer to the question: `London`
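A local sanity check (a sketch; the question/context pair is made up to match the expected answer, and this only verifies that the model loads outside the web demo):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="mfeb/albert-xxlarge-v2-squad2")
result = qa(
    question="Where do I live?",
    context="My name is Sarah and I live in London.",
)
print(result["answer"])  # expected: London
```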
08-24-2020 14:43:09
08-24-2020 14:43:09
Same issue as #6226. We are currently working on a fix, will post an update here.<|||||>Interestingly, this was working last Thursday. #6226 was from 21 days ago.<|||||>@mfebIBM It should be working right now, if you want to give it a try, let us know 👍. Sorry for the inconvenience.<|||||>Yes. Working now. Thanks!<|||||>I'm closing, don't hesitate to reopen if anything goes wrong.
transformers
6,687
closed
Typo fix in longformer documentation
08-24-2020 14:18:34
08-24-2020 14:18:34
Great, thank you :-)
transformers
6,686
closed
Update repo to isort v5
Since isort now works properly with black, we can use the latest version. It also comes with new functionality (hence the large diff), mainly: - it can deal with the __init__ - it can deal with imports in if/else blocks. This will fix #6681. Also, v5 does not use the recursive flag anymore, so I removed it from the make style and make quality commands. For users with an old isort version, this will result in make style/make quality having no impact. It's very likely that users with an open PR will need to rebase after this is merged, update isort, then run a new `make style`. We can also assist by force-pushing on their branches. I commented on the changes I made manually; all the others come from the new version of isort.
08-24-2020 13:07:08
08-24-2020 13:07:08
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=h1) Report > Merging [#6686](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a779ad7ecb9e5215b6bd1cfa0153469d37e4274?el=desc) will **decrease** coverage by `0.49%`. > The diff coverage is `92.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6686/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6686 +/- ## ========================================== - Coverage 79.65% 79.16% -0.50% ========================================== Files 156 156 Lines 28250 28254 +4 ========================================== - Hits 22503 22366 -137 - Misses 5747 5888 +141 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (ø)` | | | [src/transformers/commands/user.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy91c2VyLnB5) | `0.00% <ø> (ø)` | | | [src/transformers/data/test\_generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | | | [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `82.13% <ø> (ø)` | | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.68% <ø> (ø)` | | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <ø> (ø)` | | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <ø> (ø)` | | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <ø> (-12.22%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (ø)` | | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <ø> (ø)` | | | ... and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=footer). 
Last update [1a779ad...dbaad9c](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,685
closed
Fixed DataCollatorForLanguageModeling not accepting lists of lists
As discussed on Slack, `DataCollatorForLanguageModeling` and `DataCollatorForPermutationLanguageModeling` currently cannot take lists of lists as input, unlike `default_data_collator`. This PR fixes the issue by calling `torch.Tensor` beforehand if a list of lists is detected.
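A sketch of the conversion described above (the helper name is hypothetical, not the exact code in this PR):

```python
import torch

def to_tensor_batch(examples):
    # Accept List[List[int]] (as produced by plain tokenization) as well as tensors.
    return [
        torch.tensor(e, dtype=torch.long) if isinstance(e, (list, tuple)) else e
        for e in examples
    ]

batch = to_tensor_batch([[0, 5, 6, 2], [0, 7, 2]])
print([t.shape for t in batch])  # [torch.Size([4]), torch.Size([3])]
```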
08-24-2020 12:49:20
08-24-2020 12:49:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=h1) Report > Merging [#6685](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a779ad7ecb9e5215b6bd1cfa0153469d37e4274?el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `75.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6685/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6685 +/- ## ========================================== - Coverage 79.65% 79.64% -0.01% ========================================== Files 156 156 Lines 28250 28254 +4 ========================================== + Hits 22503 22504 +1 - Misses 5747 5750 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.70% <75.00%> (-1.21%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=footer). Last update [1a779ad...dd1b689](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,684
closed
missing reference `from model_bertabs import BertAbsSummarizer`
[This line](https://github.com/huggingface/transformers/blob/1a779ad7ecb9e5215b6bd1cfa0153469d37e4274/examples/seq2seq/bertabs/convert_bertabs_original_pytorch_checkpoint.py#L28) reads > from model_bertabs import BertAbsSummarizer yet neither `model_bertabs` nor `BertAbsSummarizer` can be found in the repository.
08-24-2020 12:28:45
08-24-2020 12:28:45
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,683
closed
Don't reset the dataset type + plug for rm unused columns
This PR avoids resetting the dataset type when removing columns, and also introduces a field in `TrainingArguments` to disable that behavior (in case the user wants to use some of those fields in an elaborate data collator).
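A minimal sketch of how such a flag might be used, assuming it is exposed as `remove_unused_columns` (the name here is an assumption; check the merged API):

```python
from transformers import TrainingArguments

# Keep all dataset columns so a custom data collator can read extra fields
# (e.g. raw text or example ids) that the model's forward() does not accept.
args = TrainingArguments(
    output_dir="out",
    remove_unused_columns=False,  # assumed name of the new field
)

# trainer = Trainer(model=model, args=args, train_dataset=ds, data_collator=my_collator)
```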
08-24-2020 11:54:16
08-24-2020 11:54:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=h1) Report > Merging [#6683](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a779ad7ecb9e5215b6bd1cfa0153469d37e4274?el=desc) will **decrease** coverage by `0.04%`. > The diff coverage is `14.28%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6683/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6683 +/- ## ========================================== - Coverage 79.65% 79.60% -0.05% ========================================== Files 156 156 Lines 28250 28256 +6 ========================================== - Hits 22503 22494 -9 - Misses 5747 5762 +15 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.63% <0.00%> (-0.53%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.34% <100.00%> (+0.08%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.51%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=footer). Last update [1a779ad...eea7e8d](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,682
closed
Fix PL token classification examples
This PR fixes the following: - fetches the germeval_14 dataset from a new [location](https://drive.google.com/drive/folders/1kC0I2UGl2ltrluI9NqDjaQJGw5iliw_J), to which it was recently moved (cc @stefan-it) - correctly implements `def get_dataloader(self, mode: int, batch_size: int, shuffle: bool = False) -> DataLoader:` from the BaseTransformer PL parent class (cc @sshleifer) I have verified that both normal and PL training work as expected. Will add tests as we rework the examples to use datasets.
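A rough sketch of the kind of override described above; `BaseTransformer`, `self.load_dataset`, and the string mode values follow the PL example's conventions and should be treated as assumptions rather than the PR's exact code:

```python
from torch.utils.data import DataLoader

class NERTransformer(BaseTransformer):  # BaseTransformer comes from the example's lightning_base module
    def get_dataloader(self, mode: str, batch_size: int, shuffle: bool = False) -> DataLoader:
        # mode is assumed to be one of "train", "dev", "test"; features are assumed cached on disk
        dataset = self.load_dataset(mode)
        return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle)
```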
08-24-2020 11:47:43
08-24-2020 11:47:43
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=h1) Report > Merging [#6682](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0e42a7bed3de9271ae39c575d7eeb54cf985921?el=desc) will **increase** coverage by `0.51%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6682/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6682 +/- ## ========================================== + Coverage 79.14% 79.66% +0.51% ========================================== Files 156 156 Lines 28248 28248 ========================================== + Hits 22358 22503 +145 + Misses 5890 5745 -145 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.95%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+3.75%)` | :arrow_up: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=footer). Last update [d0e42a7...2f843e6](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@stefan-it At the bottom of the germeval [page](https://sites.google.com/site/germeval2014ner/data?authuser=0) I found the new location for the datasets (see Downloads section)<|||||>would love tests!<|||||>I will start working on them @sshleifer<|||||>I updated the urls a while ago in this PR https://github.com/huggingface/transformers/pull/6571 😅<|||||>> I updated the urls a while ago in this PR #6571 😅 Apologies @stefan-it, I didn't know about it. There was an error in PL version of the training so I thought why not fix the dataset URL as well. I am following you now so I'll know more about your PRs. Will you fix the NLP dataset germeval_14 as well?<|||||>@vblagoje Thanks for this!! please reach out to me on [Lightning's Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A) (username same as here on github) or email me: `nate [at] pytorchlightning.ai`. I'm about to make updates across examples this week and would love to sync up with you on this.
transformers
6,681
closed
BUILD upgrade to isort v5
The contributing guide says > Right now, we need an unreleased version of isort to avoid a bug: However, it looks like the linked PR has been incorporated into the latest version of isort. I could submit a PR to address this if it would be welcome
08-24-2020 10:55:48
08-24-2020 10:55:48
which linked PR are you referring to?<|||||>this one https://github.com/timothycrosley/isort/pull/1000 (it's linked to here https://huggingface.co/transformers/contributing.html )
transformers
6,680
closed
Tokenizers work differently with NFD/NFKD vs. NFC/NFKC normalization functions for lowercased Turkish (and probably some other languages)
Transformers: 3.0.2 Tokenizers: 0.8.1

Hi. First of all, thanks for this great library. This is my first issue here. I am working at Loodos Tech as an NLP R&D Engineer in Turkey. We are pretraining and finetuning Turkish BERT/ALBERT/ELECTRA models and publishing them.

I found a bug in the tokenizers for Turkish (and possibly other languages that use a non-ASCII alphabet). For example:

```
TEXT = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR"

bt = BertTokenizer.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=True)

assert bt.tokenize(TEXT) == ['co', '##cuk', 'san', '##li', '##ur', '##fa', "'", 'dan', 'gelenleri', 'o', '##gun', 'olarak', 'yiyor']
```

But it should be:

```
assert bt.tokenize(TEXT) == ['çocuk', 'şanlıurfa', "'", 'dan', 'gelenleri', 'öğün', 'olarak', 'yiyor']
```

The same happens with the ALBERT tokenizer:

```
TEXT = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR"

at = AlbertTokenizer.from_pretrained("loodos/albert-base-turkish-uncased", do_lower_case=True, keep_accents=False)

assert at.tokenize(TEXT) == ['▁c', 'oc', 'uk', '▁san', 'li', 'urfa', "'", 'dan', '▁gelenleri', '▁o', 'gun', '▁olarak', '▁yiyor']
```

But it should be:

```
assert at.tokenize(TEXT) == ['▁çocuk', '▁şanlıurfa', "'", 'dan', '▁gelenleri', '▁öğün', '▁olarak', '▁yiyor']
```

This is caused by two things:

1- The vocabulary and SentencePiece model are created with **NFC/NFKC** normalization, but the tokenizer uses **NFD/NFKD**. NFD/NFKD normalization changes text containing the Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information. Some tokens are never trained (like "şanlıurfa", "öğün", "çocuk", etc.). NFD/NFKD normalization is not suitable for Turkish.

For BERT and ELECTRA, the tokenizer executes this code when **do_lower_case = True**:

```
def _run_strip_accents(self, text):
    """Strips accents from a piece of text."""
    text = unicodedata.normalize("NFD", text)
    output = []
    for char in text:
        cat = unicodedata.category(char)
        if cat == "Mn":
            continue
        output.append(char)
    return "".join(output)
```

For ALBERT, the tokenizer executes this code when **keep_accents = False**:

```
if not self.keep_accents:
    outputs = unicodedata.normalize("NFKD", outputs)
    outputs = "".join([c for c in outputs if not unicodedata.combining(c)])
```

2- 'I' is not the uppercase of 'i' in Turkish. Python's default lowercase and casefold functions do not handle this (check this: https://stackoverflow.com/questions/19030948/python-utf-8-lowercase-turkish-specific-letter):

```
if is_turkish:
    lower = lower.replace('\u0049', '\u0131')  # I -> ı
    lower = lower.replace('\u0130', '\u0069')  # İ -> i
```

This normalization error probably affects some other languages too. For ASCII, NFD and NFC work the same, but for Turkish they don't. Could you please provide optional parameters for the normalization function and for is_turkish? We need NFKC normalization and casefolding with I -> ı. Thanks...
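A small, self-contained demonstration of the NFD problem described above (standard library only; the point is the behavior of `unicodedata`, not any particular tokenizer):

```python
import unicodedata

text = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR".lower()

# NFD decomposes "ç" into "c" plus a combining cedilla (category "Mn");
# dropping the combining marks afterwards turns "çocuk" into "cocuk".
nfd = unicodedata.normalize("NFD", text)
stripped = "".join(c for c in nfd if unicodedata.category(c) != "Mn")

# NFC keeps the precomposed Turkish characters intact.
nfc = unicodedata.normalize("NFC", text)

print(stripped)  # accents stripped -> pieces the Turkish vocabulary never saw
print(nfc)       # Turkish characters preserved
```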
08-24-2020 10:45:45
08-24-2020 10:45:45
Did you experiment with the FastTokenizers from https://github.com/huggingface/tokenizers? cc @n1t0 <|||||>Yes, it is same. ``` from transformers import BertTokenizerFast TEXT = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR" bt = BertTokenizerFast.from_pretrained("bert-base-turkish-uncased") print(bt.tokenize(TEXT)) ``` ['co', '##cuk', 'san', '##li', '##ur', '##fa', "'", 'dan', 'gelenleri', 'o', '##gun', 'olarak', 'yiyor'] But it should be: ['çocuk', 'şanlıurfa', "'", 'dan', 'gelenleri', 'öğün', 'olarak', 'yiyor'] We developed custom normalization module [here](https://github.com/Loodos/turkish-language-models/blob/master/text_normalization.py). For now, we use tokenizers like this: ``` from transformers import BertTokenizerFast from text_normalization import TextNormalization bt = BertTokenizerFast.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=False) norm = TextNormalization() TEXT = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR" LOWER = norm.normalize(TEXT) print(bt.tokenize(LOWER)) ``` and it gives : ['çocuk', 'şanlıurfa', "'", 'dan', 'gelenleri', 'öğün', 'olarak', 'yiyor'] Could you please add config parameters in tokenizer_config.json for: * unicodedata normalization function type(NFD, NFKD, NFC, NFKC) * is_turkish(I->ı, İ->i)<|||||>Hello Sir, Is there any update about this issue?<|||||>Hi @abdullaholuk-loodos, BertTokenizer is based on WordPiece which is a subword segmentation algorithm. It may split a word into more than one piece. In this way, out-of-vocabulary words can be represented. You should not expect to see exact word tokens.<|||||>Hi @erncnerky, thanks for reply. You misunderstood me. I am not mentioning about subword segmentation algorithm. I am talking about normalization algorithm before tokenization. When do_lower_case=True, tokenizer calls _run_strip_accents(self, text) function. https://github.com/huggingface/transformers/blob/447808c85f0e6d6b0aeeb07214942bf1e578f9d2/src/transformers/models/bert/tokenization_bert.py#L420 This function, calls text = unicodedata.normalize("NFD", text) normalization function. NFD normalization is not proper for Turkish because of "ç Ç, ü Ü, ş Ş, ğ Ğ, i İ, ı I" characters. When you change NFD to NFC or NFKC result changes. NFD normalization adds some invisible characters to text when special Turkish characters that I mentioned. NFC normalization does not add these invisible characters. These invisible characters causes different tokenizations. Corpus normalized with NFKC normalization, then subword algorithm run. So it is correct. No invisible characters. But at inference, NFD normalization changes text for Turkish and causes wrong text with invisible characters. Please try that: ``` TEXT1 = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR" bt = AutoTokenizer.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=True) print(bt.tokenize(TEXT1)) TEXT2 = "çocuk şanlıurfa'dan gelenleri öğün olarak yiyor" bt = AutoTokenizer.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=False) print(bt.tokenize(TEXT2)) ``` As you see, TEXT2 is correct lowercase TEXT1, but results are different because of _run_strip_accents's NFD before tokenization. It is also same with albert tokenizer's keep_accent=False parameter. FYI, @julien-c FYI, @n1t0 <|||||>> Hi @erncnerky, thanks for reply. > > You misunderstood me. I am not mentioning about subword segmentation algorithm. I am talking about normalization algorithm before tokenization. 
> > When do_lower_case=True, tokenizer calls _run_strip_accents(self, text) function. > > https://github.com/huggingface/transformers/blob/447808c85f0e6d6b0aeeb07214942bf1e578f9d2/src/transformers/models/bert/tokenization_bert.py#L420 > > This function, calls text = unicodedata.normalize("NFD", text) normalization function. NFD normalization is not proper for Turkish because of "ç Ç, ü Ü, ş Ş, ğ Ğ, i İ, ı I" characters. When you change NFD to NFC or NFKC result changes. NFD normalization adds some invisible characters to text when special Turkish characters that I mentioned. NFC normalization does not add these invisible characters. These invisible characters causes different tokenizations. > > Corpus normalized with NFKC normalization, then subword algorithm run. So it is correct. No invisible characters. But at inference, NFD normalization changes text for Turkish and causes wrong text with invisible characters. > > Please try that: > > TEXT1 = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR" > > bt = AutoTokenizer.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=True) > printf(bt.tokenize(TEXT1)) > > TEXT2 = "çocuk şanlıurfa'dan gelenleri öğün olarak yiyor" > > bt = AutoTokenizer.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=False) > printf(bt.tokenize(TEXT2)) > > As you see, TEXT2 is correct lowercase TEXT1, but results are different because of _run_strip_accents's NFD before tokenization. > > It is also same with albert tokenizer's keep_accent=False parameter. > > FYI, @julien-c I had seen the problem. Since you gave exact word tokens which are not mostly expected especially for the morphologically rich languages such as Turkish, I wrote the comment. <|||||>> > > > Hi @erncnerky, thanks for reply. > > You misunderstood me. I am not mentioning about subword segmentation algorithm. I am talking about normalization algorithm before tokenization. > > When do_lower_case=True, tokenizer calls _run_strip_accents(self, text) function. > > https://github.com/huggingface/transformers/blob/447808c85f0e6d6b0aeeb07214942bf1e578f9d2/src/transformers/models/bert/tokenization_bert.py#L420 > > > > This function, calls text = unicodedata.normalize("NFD", text) normalization function. NFD normalization is not proper for Turkish because of "ç Ç, ü Ü, ş Ş, ğ Ğ, i İ, ı I" characters. When you change NFD to NFC or NFKC result changes. NFD normalization adds some invisible characters to text when special Turkish characters that I mentioned. NFC normalization does not add these invisible characters. These invisible characters causes different tokenizations. > > Corpus normalized with NFKC normalization, then subword algorithm run. So it is correct. No invisible characters. But at inference, NFD normalization changes text for Turkish and causes wrong text with invisible characters. > > Please try that: > > TEXT1 = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR" > > bt = AutoTokenizer.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=True) > > printf(bt.tokenize(TEXT1)) > > TEXT2 = "çocuk şanlıurfa'dan gelenleri öğün olarak yiyor" > > bt = AutoTokenizer.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=False) > > printf(bt.tokenize(TEXT2)) > > As you see, TEXT2 is correct lowercase TEXT1, but results are different because of _run_strip_accents's NFD before tokenization. > > It is also same with albert tokenizer's keep_accent=False parameter. > > FYI, @julien-c > > I had seen the problem. 
Since you gave exact word tokens which are not mostly expected especially for the morphologically rich languages such as Turkish, I wrote the comment. Thank you for your interest. Could you mention admins and like issue for taking attention to issue? <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>is there any changes ?<|||||>Any workarounds so far? I came across the same issue.
transformers
6,679
closed
Add Mirror Option for Downloads
This PR will integrate a mirror download source kindly provided by Tsinghua University. This will enormously accelerate downloads from China.
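A sketch of the intended usage, assuming the mirror is selected through a `mirror` argument to `from_pretrained`; both the argument name and the `"tuna"` value are assumptions here and should be checked against the merged API:

```python
from transformers import AutoModel, AutoTokenizer

# Download weights and vocab through the Tsinghua (TUNA) mirror instead of the default S3 host.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese", mirror="tuna")
model = AutoModel.from_pretrained("bert-base-chinese", mirror="tuna")
```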
08-24-2020 10:21:58
08-24-2020 10:21:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=h1) Report > Merging [#6679](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8fcbe486e1592321e868f872545c8fd9d359a515?el=desc) will **decrease** coverage by `1.53%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6679/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6679 +/- ## ========================================== - Coverage 80.93% 79.39% -1.54% ========================================== Files 168 168 Lines 32179 32182 +3 ========================================== - Hits 26044 25552 -492 - Misses 6135 6630 +495 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <ø> (+0.27%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <100.00%> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.45% <100.00%> (+0.04%)` | :arrow_up: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <100.00%> (-0.61%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <100.00%> (+0.02%)` | :arrow_up: | | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=footer). Last update [8fcbe48...6bb70c7](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@julien-c I updated the doc to not encourage the users to use this option.
transformers
6,678
closed
Can't load config for New Community Model
API says "Can't load config for 'donal/Pro_Berta'. Make sure that: - 'donal/Pro_Berta' is a correct model identifier listed on 'https://huggingface.co/models' - or 'donal/Pro_Berta' is the correct path to a directory containing a config.json file". But I followed the instructions to the letter. Do not know what's the issue. Please fix. https://huggingface.co/donal/Pro_Berta?text=The+goal+of+life+is+%3Cmask%3E. Pinging @mfuntowicz, @julien-c, All the files seem to be in the right place. Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/merges.txt Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/special_tokens_map.json Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/training_args.bin Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/pytorch_model.bin Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/config.json Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/tokenizer_config.json Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/vocab.json
08-24-2020 09:42:29
08-24-2020 09:42:29
Edit so it appears to work with: from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("donal/Pro_Berta") model = AutoModelWithLMHead.from_pretrained("donal/Pro_Berta") Could just be an issue with the API? maybe needs some time to load correctly? <|||||>They fixed it
transformers
6,677
closed
Batch encode plus and overflowing tokens fail when a sequence has no overflowing tokens
closes #6632
08-24-2020 08:56:08
08-24-2020 08:56:08
I face this issue as well, and I agree this PR will fix it. Thanks for the PR, I can now fix it on my local :P<|||||>@LysandreJik I encountered the same issue, glad you found a way to fix it. some tests are failing - maybe that's the reason this PR is not being merged?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=h1) Report > Merging [#6677](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ed71c21d6afcbfa2d8e5bb03acbb88ae0e0ea56a?el=desc) will **increase** coverage by `0.33%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6677/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6677 +/- ## ========================================== + Coverage 79.51% 79.85% +0.33% ========================================== Files 164 164 Lines 31022 31023 +1 ========================================== + Hits 24668 24773 +105 + Misses 6354 6250 -104 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <100.00%> (+<0.01%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-5.02%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+2.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=footer). Last update [ed71c21...4e4bfb3](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,676
closed
Allow numpy array as tokenizer input
Tokenizers allow numpy arrays since https://pypi.org/project/tokenizers/0.9.0.dev0/ thanks to @n1t0 This is related to issue #5729
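A sketch of what this enables, assuming the fast tokenizers simply start accepting NumPy arrays wherever lists of strings are accepted today:

```python
import numpy as np
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# e.g. a column pulled out of a pandas/nlp dataset as a NumPy array of strings
batch = np.array(["hello world", "numpy input goes straight in"])

encodings = tokenizer(batch.tolist())  # required today: convert to a Python list first
# encodings = tokenizer(batch)         # with this change: pass the array directly
```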
08-24-2020 08:37:17
08-24-2020 08:37:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=h1) Report > Merging [#6676](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f230a640941ef11b077c953cbda01aa981e1ec9a?el=desc) will **increase** coverage by `0.94%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6676/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6676 +/- ## ========================================== + Coverage 79.00% 79.94% +0.94% ========================================== Files 156 156 Lines 28248 28249 +1 ========================================== + Hits 22317 22584 +267 + Misses 5931 5665 -266 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <ø> (ø)` | | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.32% <100.00%> (+0.04%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.16% <0.00%> (-0.26%)` | :arrow_down: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=footer). Last update [16e3894...f693155](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Let me know when you release 0.9.0 @n1t0 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@lhoestq I don't know if this is still relevant, but this has definitely been released now!<|||||>Cool ! Will update the PR tomorrow thanks :) <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
6,675
closed
ner example failed on examples/token-classification % bash run.sh
(.venvpy36) examples/token-classification % bash run.sh 08/24/2020 15:56:12 - INFO - filelock - Lock 5754732216 acquired on ./cached_train_BertTokenizer_128.lock 08/24/2020 15:56:12 - INFO - utils_ner - Creating features from dataset file at . 08/24/2020 15:56:12 - INFO - utils_ner - Saving features into cached file ./cached_train_BertTokenizer_128 08/24/2020 15:56:12 - INFO - filelock - Lock 5754732216 released on ./cached_train_BertTokenizer_128.lock 08/24/2020 15:56:12 - INFO - filelock - Lock 5754732048 acquired on ./cached_dev_BertTokenizer_128.lock 08/24/2020 15:56:12 - INFO - utils_ner - Creating features from dataset file at . 08/24/2020 15:56:12 - INFO - utils_ner - Writing example 0 of 9 08/24/2020 15:56:12 - INFO - filelock - Lock 5754732048 released on ./cached_dev_BertTokenizer_128.lock Traceback (most recent call last): File "run_ner.py", line 304, in <module> main() File "run_ner.py", line 189, in main if training_args.do_eval File "/Users/yuanke/ghSrc/transformers/examples/token-classification/utils_ner.py", line 127, in __init__ pad_token_label_id=self.pad_token_label_id, File "/Users/yuanke/ghSrc/transformers/examples/token-classification/utils_ner.py", line 305, in convert_examples_to_features label_ids.extend([label_map[label]] + [pad_token_label_id] * (len(word_tokens) - 1)) KeyError: '[null,"AIzaSyCF97XfLoejM9NhWDAZeOcjC6kOEsEmv6A","897606708560-a63d8ia0t9dhtpdt4i3djab2m42see7o.apps.googleusercontent.com",null,null,"v2",null,null,null,null,null,null,null,"https://content.googleapis.com","SITES_%s",null,null,null,null,null,0,null,null,null,["AHKXmL0ZzONWw2TXF2GVALSixIY_wY8DFDhrOeiPL5czjvgRVJRjibFVAqFSDdzkAGNCFzy2FNRZ",1,"CJDXlfOos-sCFZWnIwAdnZoJHA",1598254209133000,[5703022,5703839,5704621,5705837,5705841,5706601,5706832,5706836,5707711,5709888,5710567,5710768,5710806,5711078,5711206,5711530,5711563,5711808,5711866,5711929,5712328,5713049,5714628,14100031,14100834,14100854,14101054,14101218,14101254,14101334,14101346,14101350,14101354,14101358,14101374,14101378,14101386,14101410,14101418,14101430,14101442,14101446,14101458,14101462,14101474,14101492]' - `transformers` version: latest - Platform: Mac - Python version: py3.6 - PyTorch version (GPU?): 1.4 - Tensorflow version (GPU?): - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
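The `KeyError` in the traceback means `label_map` was asked for a label string that is not in `labels.txt`; the JSON-like fragment in the error suggests the data file itself contains non-CoNLL lines. A quick, hedged sanity check (file names follow the token-classification example's `run.sh`; adjust as needed):

```python
# Flag any line in the training file whose last column is not a known label.
labels = set(open("labels.txt").read().split())
with open("train.txt") as f:
    for i, line in enumerate(f, 1):
        line = line.strip()
        if not line:
            continue  # blank lines separate sentences in CoNLL-style files
        parts = line.split()
        if len(parts) < 2 or parts[-1] not in labels:
            print(f"line {i} looks malformed: {line[:80]!r}")
```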
08-24-2020 08:00:35
08-24-2020 08:00:35
I also get this same error on this example.<|||||>Hi @loveJasmine and @isoboroff , this should be fixed in latest `master` version (I'm currently training a model with this example script, preprocessing was fine) :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,674
closed
Add model card for singbert.
Adding a model card for singbert, a BERT model for Singlish and Manglish.
08-24-2020 06:17:45
08-24-2020 06:17:45
@JetRunner Just a final addition of couple of more examples and customized the widget inputs, good to go :)<|||||>Not sure if I should bring this up here or raise a new issue, but when i tested my widget (hosted inference), i got this error: ``` Can't load config for 'zanelim/singbert'. Make sure that: - 'zanelim/singbert' is a correct model identifier listed on 'https://huggingface.co/models' - or 'zanelim/singbert' is the correct path to a directory containing a config.json file ``` However the config file is there as confirmed by ``` >>> transformers-cli s3 ls singbert/config.json 2020-08-24T05:54:28.000Z "6004cb287370530537f4076c9cf7fdbe" 471 singbert/pytorch_model.bin 2020-08-24T05:53:37.000Z "c060a644d84e55b2f93aa67fb1f35956" 440509997 singbert/special_tokens_map.json 2020-08-24T05:54:23.000Z "8b3fb1023167bb4ab9d70708eb05f6ec" 112 singbert/tf_model.h5 2020-08-24T05:52:23.000Z "45a8eea544f73079768bb136fe3d0a27" 536061440 singbert/tokenizer_config.json 2020-08-24T05:54:25.000Z "8b3fb1023167bb4ab9d70708eb05f6ec" 112 singbert/vocab.txt 2020-08-24T05:53:32.000Z "767659dd848f37f6937a0ffb833ee6b1" 224170 ```<|||||>> Not sure if I should bring this up here or raise a new issue, but when i tested my widget (hosted inference), i got this error: > > ``` > > Can't load config for 'zanelim/singbert'. Make sure that: - 'zanelim/singbert' is a correct model identifier listed on 'https://huggingface.co/models' - or 'zanelim/singbert' is the correct path to a directory containing a config.json file > > ``` > > However the config file is there as confirmed by > > ``` > > >>> transformers-cli s3 ls > > singbert/config.json 2020-08-24T05:54:28.000Z "6004cb287370530537f4076c9cf7fdbe" 471 > > singbert/pytorch_model.bin 2020-08-24T05:53:37.000Z "c060a644d84e55b2f93aa67fb1f35956" 440509997 > > singbert/special_tokens_map.json 2020-08-24T05:54:23.000Z "8b3fb1023167bb4ab9d70708eb05f6ec" 112 > > singbert/tf_model.h5 2020-08-24T05:52:23.000Z "45a8eea544f73079768bb136fe3d0a27" 536061440 > > singbert/tokenizer_config.json 2020-08-24T05:54:25.000Z "8b3fb1023167bb4ab9d70708eb05f6ec" 112 > > singbert/vocab.txt 2020-08-24T05:53:32.000Z "767659dd848f37f6937a0ffb833ee6b1" 224170 > > ``` Don't worry. It's sometimes flaky. As long as you can load the model with from_pretrained and you are good to go. 😉
transformers
6,673
closed
New training arg: warmup_ratio
# 🚀 Feature request When training or fine-tuning a transformer model, people usually warm up for 10% of the training steps. For now, transformers only provides a warmup_steps parameter. A warmup_ratio parameter would be helpful: it means warming up for some percentage of the total training steps. ## Motivation To use the warmup_steps parameter, people need to know the total number of training steps. When people use the training-epochs parameter instead of max_steps, it is hard to know the total number of training steps. A warmup_ratio parameter removes the need to know the total number of training steps beforehand. Another reason for a warmup_ratio parameter is that it reduces hard-coded values: people have different total training steps for different datasets, but usually set warmup_ratio to 10% by default. Original usage may look like this: ` python run_ner.py --data_dir some_data_dir \ --model_name_or_path some_model \ --output_dir some_output_dir \ --max_seq_length 512 \ --num_train_epochs 10 \ --warmup_steps 35 \ --per_device_train_batch_size 8 \ --do_train \ ` New usage may look like this: ` python run_ner.py --data_dir some_data_dir \ --model_name_or_path some_model \ --output_dir some_output_dir \ --max_seq_length 512 \ --num_train_epochs 10 \ --warmup_ratio 0.1 \ --per_device_train_batch_size 8 \ --do_train \ ` Also, we could merge warmup_steps and warmup_ratio into one parameter: if the user inputs a number 0 <= x < 1, it is treated as warmup_ratio; if the user inputs an integer, it is treated as warmup_steps (see the sketch below). ## Your contribution I can submit a PR to complete this feature. If a similar feature is already in this repo, please just close this issue.
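A minimal sketch of the ratio-to-steps conversion; the function and argument names are illustrative, not an existing transformers API:

```python
def resolve_warmup_steps(warmup_ratio: float, num_train_epochs: int, steps_per_epoch: int, warmup_steps: int = 0) -> int:
    # An explicitly given warmup_steps takes precedence over the ratio.
    if warmup_steps:
        return warmup_steps
    total_steps = num_train_epochs * steps_per_epoch
    return int(total_steps * warmup_ratio)

# 10 epochs of 350 optimizer steps each, 10% warmup -> 350 warmup steps
print(resolve_warmup_steps(warmup_ratio=0.1, num_train_epochs=10, steps_per_epoch=350))
```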
08-24-2020 01:21:23
08-24-2020 01:21:23
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Well, I think it is a useful feature since PLMs like RoBERTa require setting task-specific warmup-steps, which is annoying. But the design of merge warmup-steps and warmup-ratio together may not be a good idea. <|||||>Hi @TobiasLee, Agreed with not merging both `warmup_steps` and `warmup_ratio` into a single parameter. It seems cleaner to give higher precedence to one over the other in case both are given by user. I have raised a [PR](https://github.com/huggingface/transformers/pull/10229) with the same implemented. Your review is appreciated!
transformers
6,672
closed
TFTrainer with TPUs: Here's a suggestion on getting it to work
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes ## Analysis and Temporary Solution The approach in `TFTrainer` and `TFTrainingArguments` is really good, but it's not working right now on TPUs. It looks like we need to do some work on updating the trainer. There are a number of errors on this, the common being gradient accumulation (#6479) and `Unable to parse tensor proto`. Since Julien is on vacation, here's some things I did to get it to train on Colab with TPUs. It's hacky, but should be able to get it to work if you're anxious to use TPUs in until Julien has a fix: - The `strategy` loading order in `TFTrainingArguments` and `TFTrainer` doesn't play well with a typical workflow (process data, create `training_args`, load model and pass to `TFTrainer`). The model needs to be loaded after the strategy has been initialized, and right now the strategy is being initialized inside of `TFTrainer.` - Shuffle, batch etc. need to be called prior to instantiating the strategy. I think this has something to do with the way the strategy is defined in `TFTrainingArguments`. - Calling `training_args.training_batch_size` automatically calculates the number of TPU cores. Unfortunately, this causes the strategy to initialize, so this cannot be used to calculate `total_train_batch_size` with the current strategy implementation because it will prematurely initialize before shuffle, batch, etc. are done. - To avoid the `Unable to parse tensor proto`, shuffle, batch etc. will need to be pulled from `TFTrainer`. They're handled by the `TFTrainer` method `get_train_tfdataset`. With the current strategy implementation in TFTrainingArguments, you'll need to do that after shuffle, batch and before loading the model. ## Example with GPT2 Here's a example implementing the above changes: ``` # Note: you'll need to build transformers from source # Grab a temporary version of TFTrainer with get_train_tfdataset pulled out git clone https://github.com/alexorona/lm_tf_trainer from lm_tf_trainer import LMTFTrainer # Pulled out of TFTrainer def get_train_tfdataset(train_dataset, training_args, train_batch_size, gradient_accumulation_steps, dataloader_drop_last = False, seed = 40): total_train_batch_size = train_batch_size * gradient_accumulation_steps num_train_examples = tf.data.experimental.cardinality(train_dataset).numpy() if num_train_examples < 0: raise ValueError("The training dataset must have an asserted cardinality") ds = ( train_dataset.repeat() .shuffle(num_train_examples, seed=seed) .batch(total_train_batch_size, dataloader_drop_last) .prefetch(tf.data.experimental.AUTOTUNE) ) return training_args.strategy.experimental_distribute_dataset(ds), num_train_examples # Get Training Args training_args, num_train_examples = TFTrainingArguments(...) 
# Create a normal training_args object # Manual settings to avoid prematurely initializing the strategy tpu_cores = 8 train_batch_size = tpu_cores * training_args.per_device_train_batch_size # Formatting tf dataset from lists of different kinds of inputs input_ids = tf.convert_to_tensor(train_input_ids) # train_input_ids[0] is a list of input ids and train_input_ids is a list of lists attention_mask = tf.convert_to_tensor(attention_mask) # as above token_type_ids = tf.convert_to_tensor(token_type_ids) # as above labels = tf.convert_to_tensor(labels) # as above train_inputs = {'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids} train_dataset = tf.data.Dataset.from_tensor_slices((train_inputs, train_labels)) # Now, call the function to do shuffle, batch and initialize the strategy train_ds = get_train_tfdataset(train_dataset = train_dataset , training_args = training_args, train_batch_size = train_batch_size , gradient_accumulation_steps = training_args.gradient_accumulation_steps ) # Then, load the model with the strategy with training_args.strategy.scope(): model = TFGPT2LMHeadModel.from_pretrained('gpt2-medium') # Now, train it trainer = LMTFTrainer(args = training_args, model = model, num_train_examples = num_train_examples, total_train_batch_size = 8, train_dataset = train_ds) trainer.train() ```
08-23-2020 18:01:31
08-23-2020 18:01:31
cc @jplu for when he comes back<|||||>Thanks a lot @alexorona for this explicit issue. Indeed the TF Trainer was working with usual TPU creation on GCP but not with Colab and investigating on this was one of my priotities when back from vacation. Apparently, TPU on Colab is more restrictive which is good to have a better implementation :+1: There are two small things we still have to think about from your proposal: 1) The preprocessing is not anymore in the Trainer. A solution might be to make the function static to be used without the need to instanciate the Trainer. 2) We have to be careful when calling `training_args.training_batch_size` only when not running on TPU and use the `tpu_num_cores` argument instead.<|||||>Can you train on TPU without loading tfrecords from a remote bucket? I usually got an error `[local] not supported` and assumed that TPU does not support loading datasets directly.<|||||>You cannot load/save data on your local environment, everything must be in a GCS bucket.<|||||>There is some sample notebook on how to finetune TFGPT2LMHeadModel and about making that tfrecords. <|||||>@jplu Great points, Julien. The proposal above is just a temporary work-around. From a user perspective, there really aren't any options in `get_train_tfdataset` that haven't already been declared elsewhere, so this is a routine task with no value in exposing it to the user. Therefore, it should be hidden _somewhere_. The question is whether that _somewhere_ is in the `TFTrainer` or in `TFTrainingArguments`. From a library management perspective, there are a lot of considerations, including how similar `TFTrainer` and `TFTrainingArguments` are to `Trainer` and `TrainingArguments` for pytorch. You want these classes to behave as similarly as possible. With that in mind, here are the options from best to worst: 1. See if there's a way to modify the current `TFTrainingArguments` tpu initialization procedure so that `get_train_tfdataset` can be left in `TFTrainer`. The model is still likely to be initialized outside of the scope, so a full-proof way of dealing with this is to re-initialize the model when `trainer.train()` is called by adding something like this in `train.train()`: ``` with args.strategy.scope(): self.model = self.model ``` 2. Barring that, it might be possible to initialize the strategy when `TFTrainingArguments` is first declared. In that case, `get_train_tfdataset` could be placed inside of `TFTrainingArguments`. We'd also need to know in the documentation that the model has to be loaded after `TFTrainingArguments` and with the clause `with training_args.strategy.scope():` coming before the line that loads the model. @volker42maru I haven't had any problems with loading TF data records directly. Can you restructure so that the dataset is something like `tf.data.Dataset.from_tensor_slices((train_inputs, train_labels))`? Are you sure your batch size is equal to at least the the number of tensor cores and you're calling` strategy.experimental_distribute_dataset(dataset)` somewhere? I've been able to load and transform data just fine on Colab. You can also connect to Google Drive too and use it as a disk with: ``` from google.colab import drive drive.mount('/content/drive') ```<|||||>@jplu @alexorona Hey i want small help please see my colab notebook. i am trying to finetune gpt2 its showing training but not returning loss and please confirm me that its using tpu or not. 
https://colab.research.google.com/drive/1IqXH0_VZ8LqgnbgjP3GqRXecChVitcea?usp=sharing If i am doing something wrong please let me know.<|||||>Thanks @alexorona! I'm gonna investigate to be able to create the strategy in `trainer.train()` once all the datasets have been created and not in `TFTrainingArguments` anymore.<|||||>I tried a fix in the PR #6880. @alexorona Can you try it and tell me if it works for your env?<|||||>@jplu I'm getting that `Unable to parse tensor proto` error. Did you merge the changes into TF Trainer already?<|||||>No it is not merged, that's why if you could try the PR, it would be nice to have your feedback if it works or not.<|||||>@jplu Your approach looks great! I setup two dev notebook so you can see the remaining challenges: - It looks like there's an expected `args.training_args` attribute that isn't there. Maybe the changes to `TFTrainingArguments` didn't make it to the fork? I had to revert most instances of `self.args.training_args` to `self.args` to get to the next step. - `from_pt=bool(".bin" in self.args.model_name_or_path)` was throwing an error, but this might be due to `TFTrainingArguments` problem above. - The current strategy implementation is running out of resources on models we know it can train on (gpt2-medium 1024 tokens). This is speculation, but it might be because `strategy.experimental_distribute_dataset(ds`) isn't being used in `get_train_tfdataset` anymore. - `tb_writer` was also causing problems, so I had to comment that out too - `self.model.ckpt_manager.save()` is throwing `File system scheme '[local]' not implemented` - Special tokens are sometimes added to the model, especially when using this for dialogue-style generation. Maybe add a parameter `tokenizer_length = None` to the class on `__init__` and then replace with this: ``` if not self.model: with self.strategy.scope(): self.model = self.model_class.from_pretrained( self.args.model_name_or_path, # from_pt=bool(".bin" in self.args.model_name_or_path), config=self.config, cache_dir=self.argscache_dir, ) if self.tokenizer_length: self.model.resize_token_embeddings(self.tokenizer_length) ```<|||||>> * It looks like there's an expected `args.training_args` attribute that isn't there. Maybe the changes to `TFTrainingArguments` didn't make it to the fork? I had to revert most instances of `self.args.training_args` to `self.args` to get to the next step. > * `from_pt=bool(".bin" in self.args.model_name_or_path)` was throwing an error, but this might be due to `TFTrainingArguments` problem above. Yes, you have to modify your main file, look at the `run_tf_ner.py` example to see what has changed. > * The current strategy implementation is running out of resources on models we know it can train on (gpt2-medium 1024 tokens). This is speculation, but it might be because `strategy.experimental_distribute_dataset(ds`) isn't being used in `get_train_tfdataset` anymore. `strategy.experimental_distribute_dataset(ds)` are now respectively in the `train` and `prediction_loop` methods. > * `tb_writer` was also causing problems, so I had to comment that out too > * `self.model.ckpt_manager.save()` is throwing `File system scheme '[local]' not implemented` You have to give the log/input/output directories as a GCS path and not a local path. > * Special tokens are sometimes added to the model, especially when using this for dialogue-style generation. 
Maybe add a parameter `tokenizer_length = None` to the class on `__init__` and then replace with this: > > ``` > if not self.model: > with self.strategy.scope(): > self.model = self.model_class.from_pretrained( > self.args.model_name_or_path, > # from_pt=bool(".bin" in self.args.model_name_or_path), > config=self.config, > cache_dir=self.argscache_dir, > ) > if self.tokenizer_length: > self.model.resize_token_embeddings(self.tokenizer_length) > ``` This is a temporary solution, the model will soon be created into a `model_init` closure that one has to provide to the Trainer as argument.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
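To make the pattern discussed in this thread concrete, here is a minimal sketch of the Colab TPU setup — create the distribution strategy first, then build (and resize) the model inside its scope. The model name, the added pad token, and the exact TF API calls are illustrative assumptions, not the code from the notebook above:

```python
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

# Connect to the Colab TPU and build a distribution strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # picks up the Colab TPU address
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "<pad>"})  # illustrative special token

# Any variable creation (loading weights, resizing embeddings) must happen in scope.
with strategy.scope():
    model = TFGPT2LMHeadModel.from_pretrained("gpt2")
    model.resize_token_embeddings(len(tokenizer))
```

As noted earlier in the thread, any data read or written during TPU training still needs to live in a GCS bucket rather than on the local filesystem.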
transformers
6,671
closed
Value Error & dev file parameter: run_squad.py BERT QA finetuning
# ❓ Questions & Help Hello, I have a question about implementing run_squad.py finetuning. I have 2 questions. **1. --dev_file parameter** I have 3 datasets : train, dev, and predict file. However, I discovered that run_squad.py finetuning seems it does not supply `--dev_file` parameter, so I can use only 2 dataset(train and predict). Is there any way to use dev_file for evaluation? Or how can I check the accuracy score? **2. Value Error** I just implemented the run_squad.py script using train and predict file, and I got several errors while implementing the script. Due to CUDA out of memory error, I set the 'max_seq_length' as 64 and per_gpu_train_batch_size' as 2, and it didn't bring CUDA out of memory error, but instead, it brought 'Valueerror expected sequence of length 64 at dim 1 (got 878) '. Could you give some idea to fix this error? Thank you,
08-23-2020 11:17:22
08-23-2020 11:17:22
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,670
closed
Pretrained GPT2DoubleHeadsModel
#6667 Hello, This is a follow-up post for #6667. Just so that I understand this correctly: the main body (excluding the 2 output heads) of the pre-trained `GPT2DoubleHeadsModel` does consist of the pre-trained weights and biases, but the 2 output heads themselves are not pre-trained? Thank you,
08-22-2020 22:19:31
08-22-2020 22:19:31
One of the two output heads is the language modeling head, which is tied to the embeddings. This is already trained, as the embeddings were trained during pre-training. The second output head is a multiple choice head, which was not pre-trained. You would need to fine-tune it on a multiple choice dataset so that it works in your case.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
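To make that last point concrete, here is a minimal sketch of what fine-tuning the multiple-choice head could look like — the toy two-choice question, the EOS-as-pad trick, and the exact output ordering are assumptions that may need adjusting for your version and data:

```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# One question with two candidate answers -> shape (batch_size=1, num_choices=2, seq_len).
choices = ["The capital of France is Paris.", "The capital of France is Rome."]
enc = tokenizer(choices, padding=True, return_tensors="pt")

input_ids = enc["input_ids"].unsqueeze(0)
attention_mask = enc["attention_mask"].unsqueeze(0)
mc_token_ids = attention_mask.sum(dim=-1) - 1  # index of the last real token per choice
mc_labels = torch.tensor([0])                  # the first choice is the correct one

outputs = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    mc_token_ids=mc_token_ids,
    mc_labels=mc_labels,
)
mc_loss = outputs[0]  # with only mc_labels supplied, the first output is the MC loss
mc_loss.backward()    # then step an optimizer as usual
```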
transformers
6,669
closed
Inconsistent handling of empty string in tokenizers
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no (issue with tokenizer) - Using distributed or parallel set-up in script?: no (issue with tokenizer) ### Who can help @mfuntowicz ## Information I'm encountering inconsistent handling of empty string with `BertTokenizerFast` when tokenizing pairs. In particular, I'm observing an error when one string in a text pair is empty AND truncation is performed using the `longest_first` strategy. This issue only manifests when truncation actually occurs. If one of the strings are empty, and the other is short enough that truncation does not occur (or both strings are empty), then no error occurs (see example below). I haven't checked other tokenizers to see if they exhibit similar behavior. ## Example ```python from transformers import BertTokenizerFast tokz = BertTokenizerFast.from_pretrained('bert-base-uncased') empty = '' short = 'the ' * 509 long = 'the ' * 510 # Case 1: no truncation, no error tokz(empty, empty, padding=True, truncation='longest_first', return_tensors='pt', max_length=512) # Case 2: no truncation, no error tokz(empty, short, padding=True, truncation='longest_first', return_tensors='pt', max_length=512) # Case 3: truncation, no error tokz(long, long, padding=True, truncation='longest_first', return_tensors='pt', max_length=512) # Case 4: truncation, Truncation error tokz(empty, long, padding=True, truncation='longest_first', return_tensors='pt', max_length=512) ``` ## Possible Cause This appears to be due to logic in the tokenizers package that throws an error if any of the strings has length 0 after truncation. https://github.com/huggingface/tokenizers/blob/331e3ffc257ec2792ad88f6ff820d335859ed775/tokenizers/src/utils/truncation.rs#L100 I assume there are some checks occurring that prevent this code path from being hit in the other cases above, but I wasn't able to identify where. . 
## Stacktrace ``` Exception Traceback (most recent call last) <ipython-input-22-dda0aff18100> in <module> ----> 1 tokz('', 'word ' * 510, padding=True, truncation='longest_first', return_tensors='pt', max_length=512) ~/anaconda3/envs/aq/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, s$ride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_ma$ping, return_length, verbose, **kwargs) 1667 return_length=return_length, 1668 verbose=verbose, -> 1669 **kwargs, 1670 ) 1671 ~/anaconda3/envs/aq/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in encode_plus(self, text, text_pair, add_special_tokens, padding, truncation, max_length$ stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets$mapping, return_length, verbose, **kwargs) 1735 return_length=return_length, 1736 verbose=verbose, -> 1737 **kwargs, 1738 ) 1739 ~/anaconda3/envs/aq/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in _encode_plus(self, text, text_pair, add_special_tokens, padding_strategy, truncation_s$rategy, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_m$sk, return_offsets_mapping, return_length, verbose, **kwargs) 418 return_length=return_length, 419 verbose=verbose, --> 420 **kwargs, 421 ) 422 ~/anaconda3/envs/aq/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strateg$, truncation_strategy, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_s$ecial_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 329 *batch_text_or_text_pairs[0], 330 add_special_tokens=add_special_tokens, --> 331 is_pretokenized=is_pretokenized, 332 ) 333 else: ~/anaconda3/envs/aq/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py in encode(self, sequence, pair, is_pretokenized, add_special_tokens) 210 raise ValueError("encode: `sequence` can't be `None`") 211 --> 212 return self._tokenizer.encode(sequence, pair, is_pretokenized, add_special_tokens) 213 214 def encode_batch( Exception: Truncation error: Specified max length is too low to respect the various constraints ``` ## To reproduce See example above ## Expected behavior The handling of empty strings (cases 1, 2, and 4) should be consistent (either empty string are ok, or they result in an error). edit: grammar
08-22-2020 21:57:41
08-22-2020 21:57:41
After looking into this a bit, I believe it is isolated to the tokenizers package. If `BertTokenizerFast` is replaced with `BertTokenizer` above, no error is raised. I've opened an issue with a proposed solution there.
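Until the behavior is made consistent upstream, one possible workaround (an illustrative sketch, not a fix of the underlying tokenizers logic) is to fall back to single-sequence encoding whenever one side of the pair is empty, so pairwise truncation never has to shrink a zero-length sequence:

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def encode_pair(text_a, text_b, max_length=512):
    # Pairwise "longest_first" truncation is what trips over empty strings,
    # so encode the non-empty side alone in that case.
    if not text_a or not text_b:
        return tokenizer(text_a or text_b, padding=True, truncation=True,
                         max_length=max_length, return_tensors="pt")
    return tokenizer(text_a, text_b, padding=True, truncation="longest_first",
                     max_length=max_length, return_tensors="pt")
```

Note that the fallback drops the second segment's [SEP] and token type ids, which may or may not matter for the downstream model.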
transformers
6,668
closed
Zero-Shot-Classification: multi_class or multi_label?
https://github.com/huggingface/transformers/blob/068df740bd73b95e9a1e233e47608df942fda9da/src/transformers/pipelines.py#L1048 Since we're allowing multiple labels to be true, shouldn't this be called `multi_label` instead of `multi_class`?
08-22-2020 21:12:16
08-22-2020 21:12:16
cc @joeddav <|||||>Yeah I think `multi_label` probably would have been better in retrospect given that many do seem to make this distinction between terms. Since it's already being widely used, though, I think we'll keep it as is for the moment. If it turns out to cause consistent confusion for people we can look at changing it later on.<|||||>Leaving a comment and hoping this to be reopened. There are two major classification tasks (see for instance: https://towardsdatascience.com/journey-to-the-center-of-multi-label-classification-384c40229bff#:~:text=Difference%20between%20multi%2Dclass%20classification,the%20tasks%20are%20somehow%20related.): A multiclass classification problem is a classification task with more than two classes, that makes the assumption that each sample is assigned to one and only one label: an animal can be wither a cat or a dog. A multilabel classification assigns a set of target labels to each sample. A text might have multiple categories, for instance politics, finance and education, or none of these. The multi_class option here, works exactly opposite. You have to set it to multi_class=False to make it behave like multi class classification problem, and multi_class=True to make it multi label. It is really confusing. Switching the behaviour would probably lead to even more confusion. My suggestion would be to depreciate "multi_class=True/False" and instead add the parameter "multi=label(default)/class. Probably not ideal, but less confusing.<|||||>@peregilk I think you're right, it's consistently been a little bit confusing. I don't think the parameter names need to **exactly** reflect the vernacular, but I do think we can deprecate `multi_class` and rename it to `multi_label` while keeping the behavior the same. I think that's the solution with the least confusion. I'll send a PR. The only case where it's not multi-class is when only a single label is passed, in which case the `multi_class` argument is treated as true since it doesn't make sense to softmax over a single label.<|||||>That sounds like a great solution. Thanks.
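For readers of this thread, a short sketch of how the flag behaves in practice (the model and labels are illustrative; on versions where the rename proposed above has landed, the same calls should work with `multi_label` instead):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sequence = "The new phone has a great camera but the battery drains quickly."
labels = ["electronics", "politics", "battery life"]

# Default: scores are softmaxed over the label set, so exactly one label "wins"
# (what most people call multi-class classification).
print(classifier(sequence, labels, multi_class=False))

# Each label is scored independently against the sequence, so several labels can
# be likely at once (what most people call multi-label classification).
print(classifier(sequence, labels, multi_class=True))
```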
transformers
6,667
closed
Why does the median cross entropy loss change when I change the random seed?
Hello, I've noticed that, when I use the *pre-trained* `GPT2DoubleHeadsModel` to process multiple choice questions, the median of the cross entropy loss generated for the same set of multiple choice questions change when I change the type of my random seed (NOTE: I changed my random seed *before* loading the pre-trained BTP GPT-2 tokenizer and loading the pre-trained `GPT2DoubleHeadsModel` ... I also did `my_gpt2_model.eval()` before evaluating the loss to prevent dropout). Why does this occur? I thought the parameters of both the pre-trained model and the tokenizer are fixed, so to me, the cross entropy loss should be the same regardless of the type of random seed? For more information, below are my code: ```python # for our main experiment, we use G1G2, G4G5, G7G8, G10G12 files def fill_MC_loss_tensor( ...): for m in range(num_mc_questions): # make an empty list to store the mc_loss mc_loss_list = [] # Turn on the evaluation mode best_model_gpt2DoubleHeadsModel.eval() # for each layer j = 1,...,12, extract the hidden states at the layer j input_hidden_state = best_model_gpt2DoubleHeadsModel(input_ids, token_type_ids = token_type_ids, attention_mask = attention_mask)[3][0][:,:,:].detach() for j in range(nlayer): # Turn on the evaluation mode layer_hidden_state = best_model_gpt2DoubleHeadsModel.transformer.h[j](input_hidden_state) # feed the hidden states from each layer directly into the multiple-choice head mc_logits = best_model_gpt2DoubleHeadsModel.multiple_choice_head(layer_hidden_state[0]).squeeze(-1).detach() del layer_hidden_state gc.collect() # define the loss function loss_fct = CrossEntropyLoss() # calculate the mc_loss mc_loss = loss_fct(mc_logits.view(-1, mc_logits.size(-1)), mc_labels.view(-1)) # store the mc_loss in a list mc_loss_list = mc_loss_list + [mc_loss.tolist()] del mc_logits gc.collect() mc_loss_tensor[m,:] = torch.tensor(mc_loss_list) print('m={}'.format(m)) return mc_loss_tensor # main function for analysis def main_function(...): # set initial seed seed(125) num_iter = 200 # define mc_loss_tensor_num_iter mc_loss_tensor_num_iter = torch.zeros(num_iter, int(num_mc_questions), nlayer) mc_loss_tensor_num_iter[mc_loss_tensor_num_iter == 0] = nan for i in range(num_iter): # change seed at each iteration s = randint(1,999999) seed(s) # import the pre-trained HuggingFace GPT2Tokenizer gpt2_tokenizer = GPT2Tokenizer.from_pretrained('gpt2') # make a dictionary of special tokens special_tokens_dict = {'pad_token': '<pad>'} # add the special tokens to the tokenizer gpt2_tokenizer.add_special_tokens(special_tokens_dict) assert gpt2_tokenizer.pad_token == '<pad>' # get the encoding for the special tokens pub2_pad_token_id = gpt2_tokenizer.convert_tokens_to_ids('<pad>') pub2_eos_token_id = gpt2_tokenizer.convert_tokens_to_ids(gpt2_tokenizer.eos_token) # sanity check len(gpt2_tokenizer) # note: original size of the tokenizer is 50257 + <pad> = 50258 # get the pre-trained HuggingFace GPT2DoubleHeadsModel and # resize the token embeddings after adding the special token best_model_gpt2DoubleHeadsModel = GPT2DoubleHeadsModel.from_pretrained('gpt2', output_hidden_states = True) best_model_gpt2DoubleHeadsModel.resize_token_embeddings(len(gpt2_tokenizer)) ####### # make an empty tensor to store mc loss mc_loss_tensor = torch.zeros(num_mc_questions, nlayer).float() mc_loss_tensor[mc_loss_tensor == 0] = nan mc_loss_tensor = fill_MC_loss_tensor(...) 
if torch.isnan(mc_loss_tensor).any().tolist(): sys.exit('nan found in mc_loss_tensor') mc_loss_tensor_num_iter[i,:,:] = mc_loss_tensor print('i={}'.format(i)) return mc_loss_tensor_num_iter # for each of the 200 iteration, the computed median # (median over all questions) # cross entropy loss are different, # for the same layer. >>> main_function(...) ``` Thank you,
08-22-2020 19:14:27
08-22-2020 19:14:27
Hello! The `GPT2DoubleHeadsModel` model has a multiple choice head which generally isn't initialized from GPT-2 checkpoints. The `gpt2` checkpoint doesn't contain weights for that head as the pre-training didn't involve a multiple-choice task. If you're on the latest version of `transformers` you should see such a warning: ``` Some weights of GPT2DoubleHeadsModel were not initialized from the model checkpoint at gpt2 and are newly initialized: [... 'multiple_choice_head.summary.weight', 'multiple_choice_head.summary.bias', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` You should first fine-tune your model on a multiple-choice task.
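A small sketch that illustrates why the seed matters: the multiple-choice head is created fresh at load time, so its weights depend on torch's RNG state when `from_pretrained` runs (the attribute path below follows the `multiple_choice_head.summary.weight` name from the warning):

```python
import torch
from transformers import GPT2DoubleHeadsModel

def load_with_seed(seed):
    torch.manual_seed(seed)  # fixes the random init of the fresh multiple-choice head
    return GPT2DoubleHeadsModel.from_pretrained("gpt2")

a = load_with_seed(0)
b = load_with_seed(0)
c = load_with_seed(1)

head_weight = lambda m: m.multiple_choice_head.summary.weight
print(torch.equal(head_weight(a), head_weight(b)))  # True: same seed, same head init
print(torch.equal(head_weight(a), head_weight(c)))  # False: different seed, different head
```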
transformers
6,666
closed
added multiple model_cards for below models
Hi, I added model_cards for these models:
* codeswitch-hineng-ner-lince
* codeswitch-hineng-pos-lince
* codeswitch-nepeng-lid-lince
* codeswitch-spaeng-ner-lince
* codeswitch-spaeng-pos-lince

I also updated the model cards for these two models:
* codeswitch-hineng-lid-lince
* codeswitch-spaeng-lid-lince

Please check and, if possible, merge. Thanks and regards, Sagor
08-22-2020 17:05:10
08-22-2020 17:05:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=h1) Report > Merging [#6666](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f230a640941ef11b077c953cbda01aa981e1ec9a?el=desc) will **increase** coverage by `0.62%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6666/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6666 +/- ## ========================================== + Coverage 79.00% 79.62% +0.62% ========================================== Files 156 156 Lines 28248 28248 ========================================== + Hits 22317 22493 +176 + Misses 5931 5755 -176 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.16% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=footer). Last update [16e3894...6d88a30](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,665
closed
Finetune.sh showing killed
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I am getting this very strange issue while running the finetuning script and have absolutely no idea what is going wrong. ./finetune.sh: line 14: 9040 Killed python finetune.py --learning_rate=3e-5 --fp16 --gpus 1 --do_train --do_predict --n_val 1000 --val_check_interval 0.1 "$@" <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
08-22-2020 15:05:24
08-22-2020 15:05:24
Hi, can you post the full command and your env info?<|||||>Let's all hang out in #6711
transformers
6,664
closed
convert_BertForQuestionAnswering_pytorch_checkpoint_to_tf
Check the src/transformers/convert_BertForQuestionAnswering_pytorch_checkpoint_to_tf.py file. I only added this file, which can convert BertForQuestionAnswering PyTorch models into TFBertForQuestionAnswering.
08-22-2020 14:35:40
08-22-2020 14:35:40
transformers
6,663
closed
added model_card for model codeswitch-hineng-lid-lince and codeswitch-spaeng-lid-lince
Hi, Thank you so much for this beautiful, awesome repo. I added model cards for the models `codeswitch-hineng-lid-lince` and `codeswitch-spaeng-lid-lince`. Please check and, if possible, merge. Thanks and regards, Sagor
08-22-2020 12:58:59
08-22-2020 12:58:59
Looks great! Maybe also add a link to https://github.com/sagorbrur/codeswitch?<|||||>Hi @julien-c, thank you so much. regards Sagor
transformers
6,662
closed
Integer division of tensors using div or / is no longer supported torch
https://github.com/huggingface/transformers/blob/97bb2497abbbf978a0f78f1d414a7b45539e795b/examples/seq2seq/bertabs/modeling_bertabs.py#L885 Line throws an error "integer division of tensors using div or / is no longer supported torch" when executing `bertabs/run_summarization.py` Is replacing `.div()` with `.floor_divide()` the correct solution here?
08-22-2020 12:48:51
08-22-2020 12:48:51
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
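For reference, a tiny sketch of the candidate replacements, assuming the original `.div()` call was doing truncating integer division over non-negative index tensors (where floor division and truncation coincide):

```python
import torch

a = torch.tensor([7, 8, 9])
b = torch.tensor([2, 2, 2])

print(torch.floor_divide(a, b))  # tensor([3, 4, 4]) - integer result
print(a // b)                    # same result, operator form
print(torch.true_divide(a, b))   # tensor([3.5000, 4.0000, 4.5000]) - if a float result is wanted
```

For non-negative operands (such as beam or vocabulary indices) `floor_divide` matches the old truncating behavior, so the swap suggested above looks reasonable; for negative values the two differ.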
transformers
6,661
closed
Sequence packing
# 🚀 Feature request Add sequence packing support ## Motivation Faster training, higher utilization, replicate experiments. See https://github.com/google-research/text-to-text-transfer-transformer/issues/365 ## Your contribution I think it makes sense doing something similar to other frameworks which already have this implemented (e.g https://github.com/tensorflow/mesh/blob/6a812c8bb847e081e976533ed497c7c5016bb1ec/mesh_tensorflow/transformer/dataset.py#L474-L504 )
08-22-2020 07:12:59
08-22-2020 07:12:59
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,660
closed
Create PULL_REQUEST_TEMPLATE.md
Proposing to copy this neat feature from PyTorch. This is a small template that lets a PR submitter tell which issue that PR closes. Here is an example of it in action, at the end of the top post: https://github.com/huggingface/transformers/pull/6659
08-22-2020 05:08:55
08-22-2020 05:08:55
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=h1) Report > Merging [#6660](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f94151dc7809128b40ab68ba164742fe1c5b4e6?el=desc) will **increase** coverage by `0.64%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6660/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6660 +/- ## ========================================== + Coverage 79.01% 79.65% +0.64% ========================================== Files 156 156 Lines 28248 28248 ========================================== + Hits 22320 22502 +182 + Misses 5928 5746 -182 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6660/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6660/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6660/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=footer). Last update [0f94151...73edfdc](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,659
closed
[doc] remove BartForConditionalGeneration.generate
As suggested here: https://github.com/huggingface/transformers/issues/6651#issuecomment-678594233 this removes a generic `generate` doc with a large group of generate examples, none of which is relevant to BART. the BART class pre-amble doc already provides the examples. Fixes #6651
08-22-2020 05:01:18
08-22-2020 05:01:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=h1) Report > Merging [#6659](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f94151dc7809128b40ab68ba164742fe1c5b4e6?el=desc) will **increase** coverage by `0.61%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6659/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6659 +/- ## ========================================== + Coverage 79.01% 79.62% +0.61% ========================================== Files 156 156 Lines 28248 28248 ========================================== + Hits 22320 22493 +173 + Misses 5928 5755 -173 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=footer). Last update [0f94151...2b5ac03](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,658
closed
wip: add from scratch arg to lightning_base
08-22-2020 04:55:22
08-22-2020 04:55:22
transformers
6,657
closed
Error while loading pretrained model with "return_dict=True"
# ❓ Questions & Help torch: 1.6.0+cu101 Transformers: 3.0.2 **Error with "return_dict=True"** ``` from transformers import BertTokenizer, BertForPreTraining import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForPreTraining.from_pretrained('bert-base-uncased', return_dict=True) ``` ``` TypeError Traceback (most recent call last) <ipython-input-3-5eca8cb45c88> in <module>() 2 import torch 3 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') ----> 4 model = BertForPreTraining.from_pretrained('bert-base-uncased', return_dict=True) /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 670 671 # Instantiate model. --> 672 model = cls(config, *model_args, **model_kwargs) 673 674 if state_dict is None and not from_tf: TypeError: __init__() got an unexpected keyword argument 'return_dict' ```
08-22-2020 00:29:08
08-22-2020 00:29:08
Hi! I believe that parameter is only available on `master` right now, so you should install `transformers` from the `master` branch to use it (`pip install git+https://github.com/huggingface/transformers`). It'll be available in version `3.1.0`, which will be released in a couple of days.<|||||>Working for me after the upgrade to 3.1.0 - thanks @LysandreJik <|||||>> pip install git+https://github.com/huggingface/transformers

Thanks, this helped me too. It seems the `transformers` version really matters here, especially whether you are on the stable release or the master branch.
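Once on a release that accepts the flag, usage looks roughly like this; the output field names follow the `BertForPreTraining` output class on 3.1-era versions and should be treated as an assumption elsewhere:

```python
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased", return_dict=True)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

print(outputs.prediction_logits.shape)        # masked-LM head logits
print(outputs.seq_relationship_logits.shape)  # next-sentence prediction logits
```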
transformers
6,656
closed
Add bibtex for new paper
and link
08-21-2020 22:27:14
08-21-2020 22:27:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=h1) Report > Merging [#6656](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f94151dc7809128b40ab68ba164742fe1c5b4e6?el=desc) will **increase** coverage by `1.26%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6656/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6656 +/- ## ========================================== + Coverage 79.01% 80.27% +1.26% ========================================== Files 156 156 Lines 28248 28248 ========================================== + Hits 22320 22677 +357 + Misses 5928 5571 -357 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+1.62%)` | :arrow_up: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `97.08% <0.00%> (+19.70%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `92.17% <0.00%> (+71.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=footer). Last update [0f94151...5d78b0b](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Just checking, there is no anonymity period, right?<|||||>no sasha tweeted it https://twitter.com/srush_nlp/status/1283433427212079104?s=20
transformers
6,655
closed
Error when wandb is installed
## System Summary Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Tokenizers: 0.8.1rc1 Python: 3.7.6 Pretrained Model: GPT2 Pretrained Tokenizer: GPT2 ## Question Training without `wandb` works fine but when I `pip install wandb` and change nothing else in my code, whenever I go to run training I get the following error: ```python I0821 15:44:17.531560 46912496399424 file_utils.py:39] PyTorch version 1.5.1 available. I0821 15:44:21.471980 46912496399424 file_utils.py:55] TensorFlow version 2.0.0 available. Traceback (most recent call last): File "run_finetune_gpt2.py", line 5, in <module> from transformers import TrainingArguments, Trainer, GPT2Tokenizer File "/path/to/venv/my-venv/lib/python3.6/site-packages/transformers/__init__.py", line 158, in <module> from .trainer_utils import EvalPrediction, set_seed File "/path/to/venv/my-venv/lib/python3.6/site-packages/transformers/trainer_utils.py", line 11, in <module> import wandb File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/__init__.py", line 60, in <module> from wandb.apis import InternalApi, PublicApi, CommError File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/apis/__init__.py", line 116, in <module> from .public import Api as PublicApi File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/apis/public.py", line 28, in <module> from wandb.summary import HTTPSummary File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/summary.py", line 15, in <module> from wandb.meta import Meta File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/meta.py", line 6, in <module> import pynvml File "/cm/local/apps/cuda/libs/current/pynvml/pynvml.py", line 1671 print c_count.value ^ SyntaxError: Missing parentheses in call to 'print'. Did you mean print(c_count.value)? ``` Any thoughts?
08-21-2020 21:20:53
08-21-2020 21:20:53
What version of wandb do you have? Also would you have a simple example to reproduce this issue?<|||||>It could be a problem with your virtual environment. Maybe this issue will help: https://github.com/wandb/client/issues/539<|||||>@borisdayma Thank you for your reply. I'm using wand 0.9.5. Here is some sample code: ```python from transformers import Trainer, TrainingArguments, GPT2LMHeadModel, GPT2Tokenizer import torch from torch.utils.data import Dataset class SDAbstractsDataset(Dataset): def __init__(self): prompt1 = 'We present an update on the results of the Double Chooz experiment. Double Chooz searches for the neutrino mixing angle, θ13, in the three-neutrino mixing matrix via the disappearance of produced by the dual 4.27 GW/th Chooz B Reactors. Here we discuss updated oscillation fit results using both the rate and the shape of the anti-neutrino energy spectrum. In the most recent oscillation analysis we included data with neutron captures on Gadolinium and Hydrogen along with the reactor off data that we collected. This is an important step in our multi-year program to establish the value of θ13.' prompt2 = 'The paper covers detailed discussion on novel control system developed for adaptive fluid-based shock-absorbers serving for mitigation of unknown impact excitations. In order to provide complete independence of the control system from the loading conditions, the Hybrid Prediction Control (HPC) was elaborated. The proposed method is an extension of previously introduced kinematic feedback control which ensures optimal path finding, tracking and path update in case of high disturbance or sudden change of loading conditions. Implementation of the presented control system allows to obtain self-adaptive fluid-based absorbers providing robust impact mitigation. In contrast to previously developed methods of Adaptive Impact Absorption, the proposed control strategy does not require prior knowledge of impact excitation or its preliminary identification. The independence of applied control system from parameters of impact loading results in the capability of automatic path correction in the case of disturbance occurrence and re-adaptation to a number of subsequent impacts. The successful operation of the self-adaptive system is investigated with the use of numerical examples involving double-chamber pneumatic shock-absorber equipped with controllable valve. Efficiency of the HPC is proved by comparison with passive absorber as well as device equipped with adaptive and optimal control modules.' prompt3 = 'This study aimed to produce biosurfactant from Pseudozyma tsukubaensis using cassava wastewater and an inoculum (biomass) for galactooligosaccharides synthesis from lactose as an integrated system. First, the use of cassava wastewater as a low cost culture medium by P. tsukubaensis to produce biomass and biosurfactant was evaluated and optimized. Then, the microbial cells (biomass) obtained from the optimized process were used to produce galactooligosaccharides from lactose. The optimum conditions for biosurfactant and biomass synthesis were found to be 80% (v/v) of cassava wastewater at 30°C and 200rpm for 48h. The highest concentration of biosurfactant, that is, minimum surface tension value and maximum biomass concentration predicted were experimentally confirmed as 26.87mN/m and 10.5g/L, respectively. The biosurfactant obtained showed good thermal (121°C/1h), pH (2–11) and ionic strength (0–25% NaCl) stability. 
Excellent emulsifier activity was also verified, suggesting a potential application in enhanced oil recovery. Galactooligosaccharides synthesized by the Kluyveromyces genus have been extensively investigated, however, few studies have reported transgalactosylation ability by other yeast genera. The transgalactosylation activity of the yeast biomass at optimized conditions from 40% (w/w) lactose resulted in galactooligosaccharides production of 73.12g/L and a yield of 18.28% (w/w) at pH 8.0 and 30°C in 24h. This research showed the technical feasibility of an integrated process: biosurfactant and GOS production from P. tsukubaensis, which takes advantage of the remarkable metabolism of this microorganism. To the best of our knowledge, this is the first study reporting the potential of P. tsukubaensis to produce two economical biotechnological products of increase interest as an integrated process.' prompt4 = 'Advantages of a fuzzy predictive control algorithm are discussed in the paper. The fuzzy predictive algorithm is a combination of a DMC (Dynamic Matrix Control) algorithm and Takagi–Sugeno fuzzy modeling, thus it inherits advantages of both techniques. The algorithm is numerically effective. It is in fact generalization of the standard DMC algorithm widely used in the industry, thus the existing implementations of the DMC algorithm can be extended using the presented fuzzy approach. A simple and easy to apply method of fuzzy predictive control algorithms synthesis is presented in the paper. It can be easy applied also in the case of Multiple Input Multiple Output (MIMO) control plants. Moreover, information about measured disturbance can be included in the algorithms in an easy way. The advantages of the fuzzy predictive control algorithm are demonstrated in the example control systems of two nonlinear chemical reactors: the first one—with inverse response and the second one—a MIMO plant with time delay.' self.data_list = [prompt1, prompt2, prompt3, prompt4] def __len__(self): return len(self.data_list) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() abstract_text = self.data_list[idx] return abstract_text def sd_data_collator(dataset_samples_list): tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') tokenizer.pad_token = tokenizer.eos_token encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) batch = {} batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']]) batch['past'] = None batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']]) batch['position_ids'] = None batch['head_mask'] = None batch['inputs_embeds'] = None batch['labels'] = None batch['use_cache'] = True return batch output_dir = 'YOUR_OUTPUT_DIR' logging_dir = 'YOUR_LOGGING_DIR' training_args = TrainingArguments( output_dir=output_dir, do_train=True, logging_dir=logging_dir, save_steps=50, per_device_train_batch_size=2 ) model = GPT2LMHeadModel.from_pretrained('gpt2') sd_dataset = SDAbstractsDataset() trainer = Trainer( model=model, args=training_args, train_dataset=sd_dataset, data_collator=sd_data_collator ) #trainer.train() ```<|||||>I ran it in colab: see [notebook](https://colab.research.google.com/gist/borisdayma/ccff2200853253a3dcbf39d1100a7da0/welcome-to-colaboratory.ipynb) There does not seem to be any issue related to wandb. 
Maybe try my previous link as it may be due to issues in your local environment.<|||||>@borisdayma that link solved my problem! Thanks for your help!
transformers
6,654
closed
prepare_seq2seq_batch makes labels/ decoder_input_ids made later.
`src/` changes: - when `tgt_texts` is supplied `prepare_seq_to_seq_batch` calls the tensor that used to be called `decoder_input_ids`, `labels`. - This change helps metrics for models whose tokenizers do not add bos to the beginning of target sequences, like Marian and Pegasus, without affecting metrics for other models (bart). This branch was originally called "Fairseq batch equivalence", because it makes batches that look identical to fairseq's for mbart (and bart). - tokenization testing file for bart. - lots of cleanup and testing. `examples/seq2seq` changes: - `examples/seq2seq/finetune.py` (and eventually Seq2SeqTrainer) makes decoder_input_ids by shifting tokens right. - this enables Marian finetuning and distillation, with a few extra changes. - add `--label_smoothing` option to seq2seq/distillation.py - rename `Seq2SeqDataset` -> `LegacySeq2SeqDataset` and `TranslationDataset`-> `Seq2SeqDataset`. The new `Seq2SeqDataset` calls `prepare_seq2seq_batch`. The choice of which dataset to use is determined based on whether the tokenizer has a `prepare_seq2seq_batch` method. **Problem:** Previously on master, if the target language sequence was "Șeful ONU declară că nu există soluții militare în Siria", and the tokenizer was Marian, lm_labels would become "ONU declară că nu există soluții militare în Siria", and the model would learn to skip the first token (or not generate bos). Generations would then start very strangely, for example: `", fostul şef al personalului prezidenţial din Brazilia, va participa la un proces"` now: `"Fostul şef al personalului prezidenţial al Braziliei va fi judecat".` (same thing is happening for pegasus #6711) ### Metrics **mbart en-> ro**: no change marian: master: 23 BLEU, this branch: 25 (en ro distillation/no teacher/3 dec layers) distilbart-cnn-12-3: no change (within 0.01 ROUGE 2) master + label smoothing: `{'rouge1': 43.2764, 'rouge2': 20.4969, 'rougeL': 29.9210}` this branch + label smoothing: `{"rouge1": 43.1997, "rouge2": 20.4879, "rougeL": 30.1607}` ### TODO: - check t5-base - check pegasus If you want to test whether this branch makes truncation go away, the easiest way is to pull the mirror branch with ```bash git fetch git checkout batch-parity-cleaner ``` cc @patil-suraj
08-21-2020 21:16:23
08-21-2020 21:16:23
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=h1) Report > Merging [#6654](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bd7be9a4268221d2a0000c7e8033aaeb365c03b?el=desc) will **decrease** coverage by `0.04%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6654/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6654 +/- ## ========================================== - Coverage 79.74% 79.70% -0.05% ========================================== Files 157 157 Lines 28479 28477 -2 ========================================== - Hits 22712 22697 -15 - Misses 5767 5780 +13 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.57% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <100.00%> (ø)` | | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.15% <100.00%> (+32.48%)` | :arrow_up: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `96.82% <100.00%> (+1.51%)` | :arrow_up: | | [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `95.23% <100.00%> (+49.92%)` | :arrow_up: | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.32% <100.00%> (ø)` | | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=footer). Last update [4bd7be9...08ddfd4](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Added common tokenizer tests @LysandreJik
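For context on the `decoder_input_ids` shifting this PR relies on, here is an illustrative sketch along the lines of the BART-style helper (treat the exact EOS/pad handling as an approximation of the library code):

```python
import torch

def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """Move the last non-pad token (usually EOS) to position 0 and shift everything
    else one step to the right, so the decoder is fed <eos> y0 y1 ... while the
    labels remain y0 y1 ... <eos>."""
    shifted = input_ids.clone()
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    shifted[:, 0] = input_ids.gather(1, index_of_eos).squeeze()
    shifted[:, 1:] = input_ids[:, :-1]
    return shifted

labels = torch.tensor([[10, 11, 12, 2, 1, 1]])     # 2 = eos, 1 = pad (illustrative ids)
print(shift_tokens_right(labels, pad_token_id=1))  # tensor([[ 2, 10, 11, 12,  2,  1]])
```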
transformers
6,653
closed
old nlp causes error that pip install -e. can't fix.
``` Traceback (most recent call last): File "finetune.py", line 15, in <module> from lightning_base import BaseTransformer, add_generic_args, generic_train File "/home/shleifer/transformers_fork/examples/lightning_base.py", line 10, in <mo dule> from transformers import ( File "/home/shleifer/transformers_fork/src/transformers/__init__.py", line 23, in < module> from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig File "/home/shleifer/transformers_fork/src/transformers/configuration_albert.py", line 18, in <module> from .configuration_utils import PretrainedConfig File "/home/shleifer/transformers_fork/src/transformers/configuration_utils.py", line 25, in <module> from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url File "/home/shleifer/transformers_fork/src/transformers/file_utils.py", line 68, in <module> import nlp # noqa: F401 File "/home/shleifer/miniconda/envs/torch1.5/lib/python3.8/site-packages/nlp/__init__.py", line 41, in <module> raise ImportWarning( ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`. ```
08-21-2020 20:52:14
08-21-2020 20:52:14
Fixed with `pip install nlp --upgrade`. This problem only happens if you do have nlp installed, but an old version. We could make assertions about `nlp.__version__` to avoid this.<|||||>I still face this even now after running pip install on requirements.txt. I just upgraded the pyarrow version and then it works fine.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
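A sketch of the kind of version assertion suggested above, checking the installed distribution without importing the (possibly broken) package; the minimum-version constant is a placeholder, not the real requirement:

```python
import pkg_resources
from packaging import version

MIN_NLP_VERSION = "0.4.0"  # placeholder threshold, not the actual pin

installed = pkg_resources.get_distribution("nlp").version
if version.parse(installed) < version.parse(MIN_NLP_VERSION):
    raise ImportError(
        f"Found nlp=={installed}, but the examples need >= {MIN_NLP_VERSION}; "
        "run `pip install -U nlp`."
    )
```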
transformers
6,652
closed
['encoder.version', 'decoder.version'] are unexpected when loading a pretrained BART model
Using an example from the bart doc: https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration ``` from transformers import BartTokenizer, BartForConditionalGeneration tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') TXT = "My friends are <mask> but they eat too many carbs." model = BartForConditionalGeneration.from_pretrained('facebook/bart-large') input_ids = tokenizer([TXT], return_tensors='pt')['input_ids'] logits = model(input_ids)[0] masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() probs = logits[0, masked_index].softmax(dim=0) values, predictions = probs.topk(5) print(tokenizer.decode(predictions).split()) ``` gives: ``` Some weights of the model checkpoint at facebook/bart-large were not used when initializing BartForConditionalGeneration: ['encoder.version', 'decoder.version'] - This IS expected if you are initializing BartForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing BartForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). test:9: UserWarning: This overload of nonzero is deprecated: nonzero() Consider using one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1597302504919/work/torch/csrc/utils/python_arg_parser.cpp:864.) masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() ['good', 'great', 'all', 'really', 'very'] ``` well, there is one more issue of using a weird deprecated `nonzero()` invocation, which has to do with some strange undocumented requirement to pass the `as_tuple` arg, since pytorch 1.5 .https://github.com/pytorch/pytorch/issues/43425 we have `authorized_missing_keys`: `authorized_missing_keys = [r"final_logits_bias", r"encoder\.version", r"decoder\.version"]` https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L942 which correctly updates `missing_keys` - should there be also an `authorized_unexpected_keys` which would clean up `unexpected_keys`? (note: I re-edited this issue once I understood it better to save reader's time, the history is there if someone needs it) And found another variety of it: for `['model.encoder.version', 'model.decoder.version']` ``` tests/test_modeling_bart.py::BartModelIntegrationTests::test_mnli_inference Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartForSequenceClassification: ['model.encoder.version', 'model.decoder.version'] - This IS expected if you are initializing BartForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing BartForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). PASSED ```
08-21-2020 19:41:07
08-21-2020 19:41:07
Yeah I think the clean solution is `authorized_extra_keys` but I could also just reconvert the models. We could also leave the warning. What do you think @sgugger ?<|||||>IMHO, that warning makes the library look somewhat amateurish, as it makes the user wonder whether something is wrong, for absolutely no reason. As I'm the one who is bothered - If I can be of help resolving this please don't hesitate to delegate this to me.<|||||>The cleanest would be to reconvert the models and remove the keys we don't need, I think. Adding the `authorized_extra_keys` works too, but then using it too much could have unexpected consequences resulting in bugs, so I'd only go down that road if there is no other option.<|||||>The simplest and cleanest way would probably to simply remove these two variables from the state dict, wouldn't it? If reconverting the checkpoint you should check that it is exactly the same as the previous one, which sounds like more of a pain and more error prone than simply doing ```py !wget https://cdn.huggingface.co/facebook/bart-large/pytorch_model.bin weights = torch.load('/path/to/pytorch_model.bin') del weights['encoder.version'] del weights['decoder.version'] torch.save(weights, 'new_pytorch_model.bin') ```<|||||>Done. Also converted weights to fp16.
transformers
6,651
closed
[doc] bart doc examples aren't for bart
If you look at the very end of this section https://huggingface.co/transformers/model_doc/bart.html#transformers.BartForConditionalGeneration.generate there are 5 examples of using `generate` none of which is for BART. Is this an accidental copy-n-paste issue and they should just be removed? There are examples of generate for BART in the pre-amble of the class: https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration
08-21-2020 19:36:31
08-21-2020 19:36:31
Those examples are from the `generate` method doc which is a generic method shared by all generative models.<|||||>Thank you, @patil-suraj. Indeed, since `BartForConditionalGeneration` uses super-super class's `generate` it ends up having that generic signature in its docs. What I'm trying to say is that these example are confusing to the user since not only they are irrelevant to someone trying to use BART, there isn't even an example of using bart in that part of the doc (there is one earlier in the class signature). e.g. they don't show up in other similar classes which have `generate` https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration https://huggingface.co/transformers/model_doc/marian.html but there they have their own `generate` methods, so this doesn't happen. I'm trying to flag a poor user experience and asking whether perhaps there is a better way to do it? One possible suggestion: - remove the 5 examples from `generate` at https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L215 - replace with a note - see this class' pre-amble documentation for examples.<|||||>feel free to send a PR deleting generate here https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/bart.rst#L35 <|||||>Thank you, Sam. https://github.com/huggingface/transformers/pull/6659
transformers
6,650
closed
Add model card for electricidad-base-generator
It works like a charm! Look at the output of the example code!
08-21-2020 17:05:23
08-21-2020 17:05:23
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=h1) Report > Merging [#6650](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e8c494da78077a91071a00ab2b73717deda24be?el=desc) will **increase** coverage by `0.43%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6650/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6650 +/- ## ========================================== + Coverage 79.20% 79.64% +0.43% ========================================== Files 156 156 Lines 28248 28248 ========================================== + Hits 22374 22497 +123 + Misses 5874 5751 -123 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <0.00%> (+2.98%)` | :arrow_up: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.26% <0.00%> (+24.27%)` | :arrow_up: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `100.00% <0.00%> (+36.00%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `56.16% <0.00%> (+41.74%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=footer). Last update [9e8c494...ab682f0](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,649
closed
[Doc model summary] add MBart model summary
Add a model summary for MBart. @sshleifer, @sgugger
08-21-2020 16:43:52
08-21-2020 16:43:52
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=h1) Report > Merging [#6649](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e8c494da78077a91071a00ab2b73717deda24be?el=desc) will **increase** coverage by `0.42%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6649/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6649 +/- ## ========================================== + Coverage 79.20% 79.62% +0.42% ========================================== Files 156 156 Lines 28248 28248 ========================================== + Hits 22374 22493 +119 + Misses 5874 5755 -119 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <0.00%> (+2.98%)` | :arrow_up: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.26% <0.00%> (+24.27%)` | :arrow_up: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `100.00% <0.00%> (+36.00%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `56.16% <0.00%> (+41.74%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=footer). Last update [9e8c494...00c14dd](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@sshleifer applied the suggestions.<|||||>thx suraj!
transformers
6,648
closed
Remove hard-coded uses of float32 to fix mixed precision use
Remove hard-coded uses of float32 from the TensorFlow implementations of BERT and ELECTRA. Fixes #3320
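As a rough illustration of the kind of change this implies (not the exact diff in this PR), the attention-mask handling should derive its dtype from the tensors it is combined with instead of hard-coding float32:

```python
import tensorflow as tf

# Sketch only. Under a mixed-precision policy the hidden states are float16, so a
# hard-coded tf.cast(mask, tf.float32) produces a dtype mismatch when the mask is
# added to the attention scores. Casting to the compute dtype avoids that:
def build_extended_attention_mask(attention_mask, hidden_states):
    mask = tf.cast(attention_mask[:, tf.newaxis, tf.newaxis, :], dtype=hidden_states.dtype)
    return (1.0 - mask) * -10000.0  # large negative bias in the same dtype as the scores
```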
08-21-2020 16:37:22
08-21-2020 16:37:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=h1) Report > Merging [#6648](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e8c494da78077a91071a00ab2b73717deda24be?el=desc) will **increase** coverage by `1.07%`. > The diff coverage is `54.54%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6648/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6648 +/- ## ========================================== + Coverage 79.20% 80.27% +1.07% ========================================== Files 156 156 Lines 28248 28248 ========================================== + Hits 22374 22677 +303 + Misses 5874 5571 -303 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <16.66%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <0.00%> (+2.98%)` | :arrow_up: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=footer). 
Last update [9e8c494...6ac59c3](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,647
closed
mbart broken in summarization pipeline
``` summarizer = pipeline("summarization", model="facebook/mbart-large-cc25", tokenizer="facebook/mbart-large-cc25") ```
08-21-2020 16:18:50
08-21-2020 16:18:50
works on master.
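For reference, a minimal sketch of the working setup on master might look like the following (loading the model and tokenizer explicitly and passing the objects to the pipeline); the input text and generation lengths are placeholders:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

model_name = "facebook/mbart-large-cc25"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)
print(summarizer("Some long article text ...", max_length=60, min_length=10))
```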
transformers
6,646
closed
Error when loading my trained model
Hello, I tried to train the question-answering model using `bert-base-uncased` on SQuAD v1.1. The training process seems to have completed successfully. However, when I load the trained model, I get `File "h5py/h5f.pyx", line 88, in h5py.h5f.open OSError: Unable to open file (file signature not found)` Here is my configuration for the training process: ``` model_name_or_path: bert-base-uncased do_train: True do_eval: True overwrite_output_dir: True num_train_epochs: 10 per_device_train_batch_size: 12 per_device_eval_batch_size: 12 warmup_steps: 100 weight_decay: 0.01 learning_rate: 3e-5 evaluate_during_training: True save_steps: 5000 ``` And here is what is stored in my model directory: ``` checkpoint-10000 checkpoint-35000 checkpoint-55000 pytorch_model.bin checkpoint-15000 checkpoint-40000 checkpoint-60000 special_tokens_map.json checkpoint-20000 checkpoint-45000 checkpoint-65000 tokenizer_config.json checkpoint-25000 checkpoint-5000 checkpoint-70000 training_args.bin checkpoint-30000 checkpoint-50000 config.json vocab.txt ``` I tried to load my model with ``` self._model = BertForQuestionAnswering.from_pretrained("./model/trained_squad/", from_tf=True) ``` It would be appreciated if anyone could give me a clue about what is happening here. Is there anything wrong with my training process? Best, Yanchao
08-21-2020 16:03:02
08-21-2020 16:03:02
Does ``` self._model = BertForQuestionAnswering.from_pretrained("./model/trained_squad/") ``` work?<|||||>How silly I am! Thanks a lot. It works for me. <|||||>When I take out `from_tf=True` it then says `[Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index', 'flax_model.msgpack'] found in directory distilbert-somm or `from_tf` and `from_flax` set to False.]()`<|||||>Amazingly, the following worked: tokenizer = AutoTokenizer.from_pretrained("...", use_auth_token="<key>", from_tf=True,) model = AutoModelForSequenceClassification.from_pretrained("...", from_tf=True, use_auth_token=True) i.e. the first use_auth_token requires the actual key, while the second has to be "True"
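Putting the resolution together, a minimal sketch of loading a Trainer-saved PyTorch checkpoint (the directory path is the one from the question above):

```python
from transformers import BertForQuestionAnswering, BertTokenizer

# The output directory contains pytorch_model.bin, so from_tf must stay at its
# default of False; from_tf=True makes the loader look for a TF .h5 checkpoint,
# which is what triggered the h5py "file signature not found" error.
model_dir = "./model/trained_squad/"
model = BertForQuestionAnswering.from_pretrained(model_dir)
tokenizer = BertTokenizer.from_pretrained(model_dir)
```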
transformers
6,645
closed
New config param for cross-attention dimensionality
# 🚀 Feature request Add a new config param `n_cross` for each model that has cross-attention. This param will determine the dimensionality of the key and value matrices in cross-attention. ## Motivation I have a pretrained encoder (hidden size 512) and want to combine it with GPT-2 medium (hidden size 1024) via cross-attention. Currently I can't do this, because according to this PR https://github.com/huggingface/transformers/commit/1d6e71e1167dea9e026391ec5a1a2d7ec33d22af the key and value matrices of cross-attention have the same dimensionality as self-attention. In code it could look like: ``` config.is_cross_attention = True config.n_cross = 512 ``` And in the doc string: ``` n_inner (:obj:`int`, optional, defaults to None): Dimensionality of the inner feed-forward layers. :obj:`None` will set it to 4 times n_embd. n_cross (:obj:`int`, optional, defaults to None): Dimensionality of the cross-attention input. :obj:`None` will set it to the same value as n_embd. ``` ## Your contribution I have already fixed this in my fork, but I haven't written any tests or docs. I would be able to prepare a PR within the next two weeks.
08-21-2020 15:30:37
08-21-2020 15:30:37
is this different than d_model/hidden_size?<|||||>Yes it is. As I understand from the T5 and TransformerXL configs, this param is named `n_embd` in the GPT2 config (embedding and hidden size dimensionality). It affects the self-attention dimensionality: Q (n_embd x n_embd) K (n_embd x n_embd) V (n_embd x n_embd). For now the cross-attention dimensionality is the same, but I want: Q (n_embd x n_embd) K (n_cross x n_embd) V (n_cross x n_embd) in cross-attention.<|||||>Hey @Squire-tomsk, thanks for your issue. This can easily be resolved by introducing a new config parameter for each model that can be used as a decoder in the `EncoderDecoder` framework, so probably in `configuration_utils.py`. By default it can be set to `d_model` or `hidden_size`, if not further specified. IMO, adding another config param for this case is OK. What do you think? @sshleifer, @sgugger, @LysandreJik ? <|||||>As long as the parameter is set to the right default and doesn't break backward compatibility, I have no objection.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi, This feature has recently been implemented in #13874 . We called the attribute `cross_attention_hidden_size`. In short, there are 2 options: - either you set it to `None`, in which case the `encoder_hidden_states` will be projected using a single linear layer to match the hidden size of the decoder (in case they don't match), - or you set it to the size of the encoder, in which case the decoder will project the `encoder_hidden_states` to the same dimension as the decoder when creating `keys` and `values` in each cross-attention layer. This is the case for the recently added TrOCR model.
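To make the shape change being requested concrete, here is an illustrative sketch (not the GPT-2 code): queries are projected from the decoder width while keys and values are projected from the encoder width.

```python
import torch.nn as nn


class CrossAttentionProjections(nn.Module):
    """Sketch of cross-attention projections with mismatched encoder/decoder widths."""

    def __init__(self, n_embd: int, n_cross: int):
        super().__init__()
        self.q_proj = nn.Linear(n_embd, n_embd)   # decoder hidden states -> queries
        self.k_proj = nn.Linear(n_cross, n_embd)  # encoder hidden states -> keys
        self.v_proj = nn.Linear(n_cross, n_embd)  # encoder hidden states -> values

    def forward(self, decoder_hidden, encoder_hidden):
        return (
            self.q_proj(decoder_hidden),
            self.k_proj(encoder_hidden),
            self.v_proj(encoder_hidden),
        )
```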
transformers
6,644
closed
Dataset and DataCollator for BERT Next Sentence Prediction (NSP) task
Add `DataCollatorForNextSentencePrediction` and `TextDatasetForNextSentencePrediction` to support the MLM and next sentence prediction objectives together.
08-21-2020 14:46:34
08-21-2020 14:46:34
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=h1) Report > Merging [#6644](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/41aa2b4ef1b9c2d14d5b06af2e0faa10592779dd?el=desc) will **decrease** coverage by `0.27%`. > The diff coverage is `12.40%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6644/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6644 +/- ## ========================================== - Coverage 79.64% 79.36% -0.28% ========================================== Files 157 156 -1 Lines 28564 28384 -180 ========================================== - Hits 22750 22528 -222 - Misses 5814 5856 +42 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `56.97% <10.81%> (-34.86%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `57.14% <12.12%> (-32.57%)` | :arrow_down: | | [src/transformers/data/datasets/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `65.03% <0.00%> (-0.49%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.26% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `81.88% <0.00%> (-0.29%)` | :arrow_down: | | ... and [133 more](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=footer). Last update [41aa2b4...ec89daf](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hey, so I have a PR out for the same task: https://github.com/huggingface/transformers/pull/6376 I'm mostly just writing this comment so that I can keep track of what the reviewers have to say and what happens with the NSP task.<|||||>Hi, @sgugger ! I added dict input support like `DataCollatorForLanguageModeling` according to your suggestion, but now there is a conflict in `src/transformers/__init__.py`. Do I need to resolve it or leave it to you?<|||||>I can take care of the final merge once this is all good and @LysandreJik has approved; it's due to a new version of isort.<|||||>Could we add a test for this? I just merged `master` in to make sure it has the latest changes.<|||||>After @LysandreJik merged the `master` branch, many files need to be reformatted. To clearly show the code I modified, I did not include the `make style` changes to other files in those commits, so `check_code_quality` will not pass.
transformers
6,643
closed
bert finetuning for multilingual question answering
Hi, I'm just learning BERT now and want to use PyTorch to apply BERT to question answering. I browsed the models, and it seems that run_squad.py is appropriate for the question answering task. But I want to apply it to question answering tasks in other languages, so can I just use run_squad.py and switch the train/dev file paths to datasets in other languages? And what if I want to apply the task to some small dataset (written in another language, so it will be an actual and specific task): would the process be fine-tuning with run_squad.py on that language's datasets, and then fine-tuning again with the small data? Thank you,
08-21-2020 14:19:09
08-21-2020 14:19:09
Hi @haenvely , if you have the dataset in the same format as SQuAD then you can use the `run_squad` script. You can specify the train and eval files using the `--train_file` and `--predict_file` arguments. But you'll need a model that is pre-trained on the language that you want.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,642
closed
fix order of input/target of cross_entropy
https://pytorch.org/docs/stable/nn.functional.html#cross-entropy
08-21-2020 14:13:32
08-21-2020 14:13:32
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=h1) Report > Merging [#6642](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0e42a7bed3de9271ae39c575d7eeb54cf985921?el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6642/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6642 +/- ## ======================================= Coverage 79.14% 79.14% ======================================= Files 156 156 Lines 28248 28248 ======================================= Hits 22358 22358 Misses 5890 5890 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.45% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=footer). Last update [d0e42a7...502f692](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>As far as I can see it, this snippet in the Readme is not working correctly.
transformers
6,641
closed
Dataset and DataCollator for BERT Next Sentence Prediction (NSP) task
Add `DataCollatorForNextSentencePrediction` and `TextDatasetForNextSentencePrediction` to support the MLM and next sentence prediction objectives together.
08-21-2020 13:57:38
08-21-2020 13:57:38
transformers
6,640
closed
[Docs model summaries] Add pegasus to docs
Stitched together a short model summary for Pegasus. Would be great if @sshleifer and @sgugger could take a look :-)
08-21-2020 13:31:20
08-21-2020 13:31:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=h1) Report > Merging [#6640](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0e42a7bed3de9271ae39c575d7eeb54cf985921?el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6640/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6640 +/- ## ========================================== - Coverage 79.14% 79.12% -0.03% ========================================== Files 156 156 Lines 28248 28248 ========================================== - Hits 22358 22351 -7 - Misses 5890 5897 +7 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-2.01%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=footer). Last update [d0e42a7...773081d](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I guess we could in general link the model summaries to the model doc as well or vice-versa for each model? <|||||>We also should probably automatically link each model card to its doc In the meantime, I think duplication is fine:)
transformers
6,639
closed
Run_glue.py, how can I continue previous fine-tuning training?
My previous fine-tuning run on GLUE's SST was shut down, and now I want to continue training from the latest checkpoint. How can I implement this? <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
08-21-2020 12:49:14
08-21-2020 12:49:14
If you're using the Trainer API, it's all automatic by running the same command as before.<|||||>Thanks a lot!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,638
closed
Running squad_convert_examples_to_features causes warnings.
Running the function squad_convert_examples_to_features from data/processors/squad.py causes a warning. It is very annoying to have to disable warnings when running the function many times :) The warning says that max_len is deprecated in tokenization_utils_base.py. `tokenization_utils_base.py:1320: FutureWarning: The max_len attribute has been deprecated and will be removed in a future version, use model_max_length instead.` Code to reproduce: ```python from transformers import AutoTokenizer from transformers.data import SquadExample, squad_convert_examples_to_features tokenizer = AutoTokenizer.from_pretrained('a-ware/roberta-large-squadv2') example = SquadExample(None,'what is test','test is good',None,None,None) features = squad_convert_examples_to_features( examples=[example], tokenizer=tokenizer, max_seq_length=512, doc_stride=128, max_query_length=64, is_training=False, tqdm_enabled=False ) ```
08-21-2020 11:09:19
08-21-2020 11:09:19
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Found this issue while looking for the same problem. I'm using a `question_answering` pipeline (which calls `squad_convert_examples_to_features` under the hood) and it's really annoying, when building an information retrieval system, to get the warning each time the QnA inference is made... I tried to disable it with `logging.getLogger("transformers.tokenization_utils_base").setLevel(logging.ERROR)` without success. Do you have any clues how to disable it or to change the code so it's not deprecated anymore? Thanks in advance<|||||>The version v4.0.0-rc-1 that will be released today or tomorrow will not have this warning anymore.<|||||>You can also disable this warning with ```python import warnings warnings.simplefilter("ignore") ```
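A narrower alternative to silencing all warnings is to filter only this specific FutureWarning; this is plain standard-library usage, not a transformers API:

```python
import warnings

# Ignore only the max_len deprecation warning emitted by the tokenizer base module,
# while keeping every other warning visible.
warnings.filterwarnings(
    "ignore",
    category=FutureWarning,
    module="transformers.tokenization_utils_base",
    message=r"The max_len attribute has been deprecated.*",
)
```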
transformers
6,637
closed
Add typing.overload for convert_ids_tokens
The annotation of `convert_ids_tokens` is not sufficient. When the inputs are `List[str]` and `str`, the return types are always `List[int]` and `int`, respectively. This can be solved with [typing.overload](https://docs.python.org/3/library/typing.html#typing.overload).
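For illustration, a simplified sketch of what the proposed overloads express (the class and method bodies here are placeholders, not the actual tokenizer implementation):

```python
from typing import List, Union, overload


class TokenizerSketch:
    @overload
    def convert_tokens_to_ids(self, tokens: str) -> int: ...

    @overload
    def convert_tokens_to_ids(self, tokens: List[str]) -> List[int]: ...

    def convert_tokens_to_ids(self, tokens: Union[str, List[str]]) -> Union[int, List[int]]:
        # Placeholder lookup: a str maps to a single id, a list of str maps to a list of ids.
        if isinstance(tokens, str):
            return 0
        return [0 for _ in tokens]
```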
08-21-2020 09:43:37
08-21-2020 09:43:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=h1) Report > Merging [#6637](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bdf7e5de92d76ff6dd7cee317ffa43bed8c5d233?el=desc) will **increase** coverage by `0.80%`. > The diff coverage is `71.42%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6637/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6637 +/- ## ========================================== + Coverage 79.47% 80.28% +0.80% ========================================== Files 156 156 Lines 28245 28251 +6 ========================================== + Hits 22448 22681 +233 + Misses 5797 5570 -227 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <71.42%> (-0.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.17% <0.00%> (-12.53%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.24% <0.00%> (-3.53%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=footer). Last update [bdf7e5d...0d86fc4](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM but will let others chime in<|||||>@LysandreJik What else should I do to merge this PR?
transformers
6,636
closed
Pre-training a language model on a large dataset
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Hi, I'm getting a memory error when I run the example code for language modeling. I'm interested in pre-training a RoBERTa model using a 25GB text data on a virtual machine with a `v3-8` TPU on Google Cloud Platform. I'm using the following command with `transformers/examples/xla_spawn.py` and `transformers/examples/run_language_modeling.py`. ``` python xla_spawn.py --num_cores 8 \ run_language_modeling.py \ --output_dir=[*****] \ --config_name=[*****] \ --tokenizer_name=[*****] \ --do_train \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 128 \ --learning_rate 6e-4 \ --weight_decay 0.01 \ --adam_epsilon 1e-6 \ --adam_beta1 0.9 \ --adam_beta2 0.98 \ --max_steps 500_000 \ --warmup_steps 24_000 \ --save_total_limit 5 \ --save_steps=100_000 \ --block_size=512 \ --train_data_file=[*****] \ --mlm \ --line_by_line ``` When I run this, I get the following error. ``` 08/20/2020 15:21:07 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at [*****] Traceback (most recent call last): File "xla_spawn.py", line 72, in <module> main() File "xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn start_method=start_method) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes while not context.join(): File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 108, in join (error_index, name) Exception: process 0 terminated with signal SIGKILL ``` It looks like the script gets killed while it's loading the training data [here](https://github.com/huggingface/transformers/blob/573bdb0a5d2897ff6c7520ebb38693c7acfbf17e/src/transformers/data/datasets/language_modeling.py#L89-L92). ```python with open(file_path, encoding="utf-8") as f: lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())] ``` When I run the above block of code separately with `transformers/examples/xla_spawn.py`, I get an error. 
``` Traceback (most recent call last): File "xla_spawn.py", line 72, in <module> main() File "xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn start_method=start_method) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes while not context.join(): File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 108, in join (error_index, name) Exception: process 0 terminated with signal SIGKILL ``` When I run the above block of code separately using `n1-highmem-16 (16 vCPUs, 104 GB memory)` without TPU, I still get an error. ``` Traceback (most recent call last): File "debug_load.py", line 7, in <module> lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())] File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/codecs.py", line 321, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) MemoryError ``` Has anyone successfully reproduced the original RoBERTa model or pretrained a language model with a large dataset using Huggingface's transformers (with TPU)? If so, what are the specifications of your machine? Has this code (`transformers/examples/run_language_modeling.py`) tested on a large dataset? <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**: https://discuss.huggingface.co/t/pre-training-a-language-model-on-a-large-dataset/790
08-21-2020 08:56:56
08-21-2020 08:56:56
Maybe @LysandreJik can help here? <|||||>Same question...<|||||>Linked to https://github.com/huggingface/transformers/issues/6873<|||||>@go-inoue For large datasets, it is recommended to use mmap. I like Apache Arrow, which is also used in the huggingface datasets library. Megatron-LM also uses mmap, but with a different implementation.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
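As a sketch of the lazy-loading idea mentioned above (illustrative only, not a drop-in replacement for LineByLineTextDataset), one can index byte offsets once and tokenize lines on demand so the 25GB file never has to fit in RAM:

```python
import torch
from torch.utils.data import Dataset


class LazyLineByLineTextDataset(Dataset):
    def __init__(self, tokenizer, file_path: str, block_size: int):
        self.tokenizer = tokenizer
        self.file_path = file_path
        self.block_size = block_size
        # One pass over the file to record the byte offset of every non-empty line.
        self.offsets = []
        with open(file_path, "rb") as f:
            offset = 0
            for line in f:
                if line.strip():
                    self.offsets.append(offset)
                offset += len(line)

    def __len__(self):
        return len(self.offsets)

    def __getitem__(self, i):
        # Seek to the stored offset and tokenize a single line lazily.
        with open(self.file_path, "rb") as f:
            f.seek(self.offsets[i])
            line = f.readline().decode("utf-8").strip()
        ids = self.tokenizer.encode(line, truncation=True, max_length=self.block_size)
        return torch.tensor(ids, dtype=torch.long)
```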
transformers
6,635
closed
How to convert tokenizer output to train_dataset which is required by Trainer API
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I tried doing tokenization following the documentation of huggingface transformers. ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') encoded_input = tokenizer(batch_of_sequences) ``` The pretrained tokenizer returns a dictionary containing three keys: ``` encoded_input = { 'input_ids': [[],[],[]], 'token_type_ids': [[],[],[]], 'attention_mask': [[],[],[]] } ``` The Trainer API requires train & eval datasets of type `torch.utils.data.Dataset`. How can we use this output to create the training dataset required by the Trainer API? <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**: https://stackoverflow.com/questions/63519373/how-to-convert-tokenizer-output-to-train-dataset-which-is-required-by-trainer-ap
08-21-2020 08:43:30
08-21-2020 08:43:30
Hi @questpavan , This awesome [tutorial](https://huggingface.co/transformers/master/custom_datasets.html) walks you through how you can fine-tune transformer models using a custom dataset. It also covers pre-processing and how to create the dataset, etc.<|||||>Thanks @patil-suraj. It helped a lot.
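For completeness, a minimal sketch of the Dataset wrapper described in that tutorial (the labels here are hypothetical placeholders):

```python
import torch
from torch.utils.data import Dataset


class EncodingsDataset(Dataset):
    """Wraps the dict returned by the tokenizer so the Trainer can consume it."""

    def __init__(self, encodings, labels):
        self.encodings = encodings  # dict of lists: input_ids, token_type_ids, attention_mask
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


# Assumed usage with the encoded_input from the question above:
# train_dataset = EncodingsDataset(encoded_input, labels=[0, 1, 1])
```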
transformers
6,634
closed
Fix error class instantiation
The lines I fixed contained bugs, causing `TypeError: 'ModuleNotFoundError' object is not callable`
08-21-2020 08:41:05
08-21-2020 08:41:05
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=h1) Report > Merging [#6634](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e5f452275b3d963bdff5b9c01346bef62032a150?el=desc) will **increase** coverage by `0.19%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6634/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6634 +/- ## ========================================== + Coverage 79.43% 79.62% +0.19% ========================================== Files 156 156 Lines 28245 28245 ========================================== + Hits 22436 22491 +55 + Misses 5809 5754 -55 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_bert\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `26.88% <ø> (ø)` | | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.01%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+6.20%)` | :arrow_up: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=footer). Last update [e5f4522...1431eff](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,633
closed
Model card for Spanish ELECTRA base
08-21-2020 07:19:46
08-21-2020 07:19:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=h1) Report > Merging [#6633](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e5f452275b3d963bdff5b9c01346bef62032a150?el=desc) will **increase** coverage by `0.84%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6633/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6633 +/- ## ========================================== + Coverage 79.43% 80.28% +0.84% ========================================== Files 156 156 Lines 28245 28245 ========================================== + Hits 22436 22676 +240 + Misses 5809 5569 -240 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.16% <0.00%> (+32.50%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.82% <0.00%> (+34.35%)` | :arrow_up: | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=footer). Last update [e5f4522...017da63](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,632
closed
Error on `PreTrainedTokenizerBase.batch_encode_plus` with `return_overflowing_tokens=True, truncation=True`
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 (master branch) - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.8.1 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> tokenizers: @mfuntowicz ## Information Model I am using (Bert, XLNet ...): bert-base-uncased The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the below code <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") tokenizer.batch_encode_plus( ["foo", "bar " * 1000], return_overflowing_tokens=True, truncation=True, padding=True ) ``` raises the following error: ``` Traceback (most recent call last): File "foo.py", line 4, in <module> tokenizer.batch_encode_plus( File "/Users/user/work/transformers/src/transformers/tokenization_utils_base.py", line 2121, in batch_encode_plus return self._batch_encode_plus( File "/Users/user/work/transformers/src/transformers/tokenization_utils.py", line 534, in _batch_encode_plus batch_outputs = self._batch_prepare_for_model( File "/Users/user/work/transformers/src/transformers/tokenization_utils.py", line 606, in _batch_prepare_for_model batch_outputs = self.pad( File "/Users/user/work/transformers/src/transformers/tokenization_utils_base.py", line 2305, in pad assert all( AssertionError: Some items in the output dictionnary have a different batch size than others. ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> No error
08-21-2020 06:01:20
08-21-2020 06:01:20
Try `padding=True` ?<|||||>@patil-suraj Same error occurs<|||||>@mfuntowicz the issue here is that one of the strings returns an `overflowing_tokens` value (as it overflows) while the other doesn't. The resulting batch contains `overflowing_tokens` with a single list, rather than two (one for each string). Here's a proposed fix https://github.com/huggingface/transformers/pull/6677.
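Until the fix lands, a possible workaround (a sketch, not an official recommendation) is to encode each sequence on its own, so that a missing `overflowing_tokens` entry for short inputs cannot break batch padding:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

encoded = [
    tokenizer.encode_plus(
        text,
        return_overflowing_tokens=True,
        truncation=True,
        max_length=512,
        padding="max_length",
    )
    for text in ["foo", "bar " * 1000]
]
```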
transformers
6,631
closed
Fine-tuning with Chinese LCQMC data: val_acc does not increase
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Hello everyone, when I use the huggingface bert-base-chinese model to fine-tune on the LCQMC dataset, the train_acc rises normally (e.g. 0.6 -> 0.8 -> 0.9), but the val_acc does not increase; it always stays at roughly the same number, around 0.56. Do you know what is happening? <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
08-21-2020 02:07:12
08-21-2020 02:07:12
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,630
closed
Tokenize got an unexpected keyword argument 'pad_to_max_length', 'return_attention_mask'
This works fine when I run on my GPU but gives the above error when I try to run on my CPU. They both have the same environment setup. Error: File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 786, in encode_plus first_ids = get_input_ids(text) File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 778, in get_input_ids return self.convert_tokens_to_ids(self.tokenize(text, **kwargs)) File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 649, in tokenize tokenized_text = split_on_tokens(added_tokens, text) File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 646, in split_on_tokens else [token] for token in tokenized_text), []) File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 646, in <genexpr> else [token] for token in tokenized_text), []) TypeError: _tokenize() got an unexpected keyword argument 'pad_to_max_length' environment: Python 3.7 transformers 3.0.2 torch 1.5.1
08-21-2020 01:32:32
08-21-2020 01:32:32
Your GPU and CPU environments might have different versions of transformers. Could you try updating to master?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,629
closed
Remove accidental comment
08-21-2020 01:19:46
08-21-2020 01:19:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=h1) Report > Merging [#6629](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e5f452275b3d963bdff5b9c01346bef62032a150?el=desc) will **decrease** coverage by `0.24%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6629/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6629 +/- ## ========================================== - Coverage 79.43% 79.18% -0.25% ========================================== Files 156 156 Lines 28245 28245 ========================================== - Hits 22436 22366 -70 - Misses 5809 5879 +70 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <ø> (ø)` | | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `89.97% <0.00%> (-3.80%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=footer). Last update [e5f4522...e054289](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,628
closed
PreTrainedModel's tie_weights invocation needs to be configurable
`PreTrainedModel` defines `tie_weights` method and then in [one place suggests](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L512) > Takes care of tying weights embeddings afterwards if the model class has a :obj:`tie_weights()` method. But since the super-class has it defined, it's always there. So the only way for a sub-class to avoid this "tying" is to override it with: ``` def tie_weights(self): pass ``` if nothing else happens that comment needs to be edited to suggest a noop override in the sub-class. But it took some hunting to get there, so a better solution is needed. Most likely, currently, most (all?) models in transformers with encoder/decoder share token embed weights, hence the issue didn't come up. I'm working on porting a fairseq transformer and there the enc/dec token embeds aren't shared. I propose a solution which adds a new param to `PretrainedConfig`, say: `is_enc_dec_sharing_embeds=True` and let the subclass override those, then add at the start of `tie_weights` in `modeling_utils.py` ``` def tie_weights(self): if not self.config.is_enc_dec_sharing_embeds: return ``` that way it's easy to quickly become aware that an action needs to be taken and set the desired behavior from within the subclass. Thoughts? If the proposed solution is agreeable, please, let me know which config param name it should be `is_enc_dec_sharing_embeds` - or different and I will submit a PR. Thank you. **edit:** OK, having had a closer look: ``` grep -r -A 2 'def tie_weights' src/transformers | grep pass | wc -l ``` we have 5 sub-classes that override it with a no-op so only some rely on the default. Bad superclass, no cookies for you.
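To make the proposal concrete, here is a minimal sketch of how a model's config subclass could opt out once such a flag exists. The flag name below is the placeholder proposed above and may well end up different (the discussion later leans toward `tie_word_embeddings`):

```python
from transformers import PretrainedConfig

class MyEncDecConfig(PretrainedConfig):
    model_type = "my-enc-dec"  # hypothetical model, for illustration only

    def __init__(self, is_enc_dec_sharing_embeds=False, **kwargs):
        super().__init__(**kwargs)
        # Placeholder flag name; tie_weights() would early-return when this is False.
        self.is_enc_dec_sharing_embeds = is_enc_dec_sharing_embeds
```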
08-21-2020 00:59:57
08-21-2020 00:59:57
Hey @stas00 - I think I agree with you here! I think this is not limited to Encoder Decoder models only, so I think a better configuration parameter would be `tie_word_embeddings`. `tie_word_embeddings` could be set to `True` by default and then set to `False` for the respective classes (such as Reformer, ....). What do you think? One thing, I'm wondering: For an encoder-decoder model, I think this variable should only apply to the decoder part (and tie its input and output word embeddings) and the encoder embeddings should be set equal to the decoder input embeddings by design in the `modeling_<model_name>.py` file (as it's done in `modeling_t5.py` for example). Overall, I agree these `def tie_weights(self): pass` are not great. Thanks for opening this issue. @sgugger, @LysandreJik, @sshleifer, @thomwolf - could you add your opinion here as well? <|||||>I think it's okay to control the tying in the init with a new param. For encoder/decoder models, I don't have enough experience with those to know the best default.<|||||>> Hey @stas00 - I think I agree with you here! I think this is not limited to Encoder Decoder models only, so I think a better configuration parameter would be `tie_word_embeddings`. `tie_word_embeddings` could be set to `True` by default and then set to `False` for the respective classes (such as Reformer, ....). What do you think? If you think this is a clear enough that works for me. And yes, `True` by default, since most current classes use it. And while at it, perhaps, rename the method `tie_weights` to `tie_word_embeddings` to match the config option? the current `tie_weights` method name is not descriptive enough to tell which weights it's about tie, IMHO. > One thing, I'm wondering: For an encoder-decoder model, I think this variable should only apply to the decoder part (and tie its input and output word embeddings) and the encoder embeddings should be set equal to the decoder input embeddings by design in the `modeling_<model_name>.py` file (as it's done in `modeling_t5.py` for example). I haven't delved into t5 yet, but the fairseq transformer, unlike most (all?) translators we currently have, has different input and output vocabs, and they are of different sizes, so you can't share the two. If I look at t5 it has shared embeds: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L659 I guess it's only the reformer that overrides `tie_weights` at the moment (all 5 matches are in `modeling_reformer`), but it does have a single vocab. So we have 2 different issues here: 1. tie in/out weights: yes/no 2. vocabs: shared/not shared But here we are looking at just issue 1. I haven't gotten yet to the juicy part yet, just trying to match the pretrained weights to the model and adjusting a copy of BART to the weights, I will be able to give a more intelligent follow up once I step through the whole process, and have a better understanding of what ties where. <|||||>This sounds like a good idea. I would advocate for a `tie_word_embeddings` parameter in the configuration as @patrickvonplaten suggested, but I would keep `tie_weights` as the method that does the weight tying rather than renaming that method as well. Just a quick glance at the configuration tells you which weights it's about to tie, and it will able to handle other cases of weight tying that we might encounter in the future without the need of adding additional new methods.<|||||>Awesome, I will open a PR. 
I actually need this feature for the `EncoderDecoderModel` as well.<|||||>Fixed in https://github.com/huggingface/transformers/pull/6692
transformers
6,627
closed
BartTokenizerFast cannot decode PyTorch tensors
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: MacOS and Linux - Python version: 3.6 and 3.7 - PyTorch version (GPU?): 1.6.0 (no and yes) - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> examples/seq2seq: @sshleifer (Discovered in #6610.) ## Information Model I am using (Bert, XLNet ...): Bart. Any Bart model (reproduced with distilbart-cnn-12-6 and distilbart-xsum-1-1. ## To reproduce Steps to reproduce the behavior: ```python In [1]: from transformers import BartTokenizerFast, BartForConditionalGeneration In [2]: model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-xsum-1-1") In [3]: tokenizer = BartTokenizerFast.from_pretrained("sshleifer/distilbart-xsum-1-1") In [4]: input_ids = tokenizer("This is a test string.", return_tensors="pt") In [5]: input_ids Out[5]: {'input_ids': tensor([[ 0, 713, 16, 10, 1296, 6755, 4, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1]])} In [6]: summary_ids = model.generate(input_ids['input_ids'], num_beams=4, max_length=5, early_stopping=True) In [7]: print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-7-d476aca57720> in <module> ----> 1 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) <ipython-input-7-d476aca57720> in <listcomp>(.0) ----> 1 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) ~/.pyenv/versions/finetuning-bart/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces) 437 self, token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True 438 ) -> str: --> 439 text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) 440 441 if clean_up_tokenization_spaces: ~/.pyenv/versions/finetuning-bart/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py in decode(self, ids, skip_special_tokens) 265 raise ValueError("None input is not valid. 
Should be a list of integers.") 266 --> 267 return self._tokenizer.decode(ids, skip_special_tokens=skip_special_tokens) 268 269 def decode_batch( TypeError: In [8]: ``` ## Expected behavior Fast tokenizer should be able to decode without producing an error.
08-21-2020 00:37:04
08-21-2020 00:37:04
Faster snippet w same error ```python import torch tokenizer = BartTokenizerFast.from_pretrained("sshleifer/distilbart-xsum-1-1") ids = torch.tensor([1,2,3], dtype=torch.long) tokenizer.decode(ids) ```<|||||>Any update on this?<|||||>Hi, neither the slow or fast tokenizers can decode torch tensors. They can decode lists of Python integers, as it is stated in the [docs](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.decode).<|||||>@LysandreJik : That's not entirely accurate. This works: ```python from transformers import BartTokenizer tokenizer = BartTokenizer.from_pretrained("facebook/bart-large") import torch ids = torch.tensor([1,2,3], dtype=torch.long) tokenizer.decode(ids) ``` But this doesn't: ```python from transformers import BartTokenizerFast tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large") import torch ids = torch.tensor([1,2,3], dtype=torch.long) tokenizer.decode(ids) ``` I understand the docs say it should only decode lists, but slow tokenizers do also decode tensors.<|||||>The above @setu4993 @LysandreJik seems to give same code twice. You have to pass in a list of integers into the decode function. Official doc says decode function can process Torch.tensors, but it does not work well in all cases. Instead, give this a try ```python from transformers import BartTokenizerFast tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large") import torch ids = torch.tensor([1,2,3], dtype=torch.long) tokenizer.decode(ids.tolist()) ``` If tensor is [[...]], instead of [...], do ```python from transformers import BartTokenizerFast tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large") import torch ids = torch.tensor([1,2,3], dtype=torch.long) tokenizer.decode(*ids.tolist()) ```
transformers
6,626
closed
Specify config filename in HfArgumentParser
Currently, HfArgumentParser will load arguments from a config file if that config file has the same name as the script being run. So `train.py` would have a corresponding `train.args`. This extends the method to load from any config file that is explicitly specified, so `train.py` could use a `bert-large.args` or a `bert-small.args`.
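A rough usage sketch of what this enables; note that the keyword name below is an assumption based on the description, not necessarily the final API:

```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class TrainArgs:
    model_name: str = field(default="bert-base-uncased")
    learning_rate: float = field(default=5e-5)

parser = HfArgumentParser(TrainArgs)
# Hypothetical keyword argument: point the parser at an explicit .args file
# (assumes bert-large.args exists next to the script).
(train_args,) = parser.parse_args_into_dataclasses(args_filename="bert-large.args")
print(train_args)
```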
08-20-2020 23:47:41
08-20-2020 23:47:41
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=h1) Report > Merging [#6626](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc820476a5c72060f810f825298befd5ec85da4d?el=desc) will **decrease** coverage by `0.11%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6626/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6626 +/- ## ========================================== - Coverage 79.98% 79.87% -0.12% ========================================== Files 153 153 Lines 28005 28007 +2 ========================================== - Hits 22401 22371 -30 - Misses 5604 5636 +32 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/hf\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `67.74% <0.00%> (-1.49%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (+1.50%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `87.73% <0.00%> (+63.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=footer). Last update [bc82047...cf515d1](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Looks like a nice addition, LGTM<|||||>Thanks for adding this!
transformers
6,625
closed
**Specifically for Pegasus-arxiv** - PegasusForConditionalGeneration - Error in loading state dictionary
Please note that this specifically happens only for **pegasus-arxiv** (Reopening Issue 6609). Before you close, please let me know if this works for pegasus-arxiv. Because I am getting the below error even after taking a fresh git fetch of transformers and removing all checkpoints from cache. ## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-1034-azure-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): N/A - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help @sshleifer ## Information Model I am using (Bert, XLNet ...): **google/pegasus-arxiv** ## To reproduce ```ruby mname = "google/pegasus-arxiv" model = PegasusForConditionalGeneration.from_pretrained(mname) ``` throws error as below File "/anaconda/envs/py37_default/lib/python3.7/site-packages/transformers/modeling_utils.py", line 894, in from_pretrained model.class.name, "\n\t".join(error_msgs) RuntimeError: Error(s) in loading state_dict for PegasusForConditionalGeneration: size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
08-20-2020 22:11:05
08-20-2020 22:11:05
works for me with ``` - Python version: 3.7.4 - PyTorch version (GPU?): 1.5.1 (True) ``` Try ``` mname = "google/pegasus-arxiv" model = PegasusForConditionalGeneration.from_pretrained(mname, force_download=True) ``` <|||||>> works for me with > > ``` > - Python version: 3.7.4 > - PyTorch version (GPU?): 1.5.1 (True) > ``` > > Try > > ``` > mname = "google/pegasus-arxiv" > model = PegasusForConditionalGeneration.from_pretrained(mname, force_download=True) > ``` Ok. It works now. Even though I did not use force_download, it downloaded a fresh copy of checkpoint. This can be closed.
transformers
6,624
closed
Bart: make decoder_input_ids correctly if labels specified.
should call shift_tokens_right(labels), like T5. cc @patil-suraj: does that make sense?
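For context, a minimal sketch of what the right shift does when building decoder inputs from labels (simplified; the wrap-around of the final EOS token reflects how Bart's helper appears to behave, so treat the details as an assumption):

```python
import torch

def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """Move the last non-pad token (EOS) to position 0 and shift the rest right."""
    shifted = input_ids.clone()
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    shifted[:, 0] = input_ids.gather(1, index_of_eos).squeeze(-1)
    shifted[:, 1:] = input_ids[:, :-1]
    return shifted

labels = torch.tensor([[10, 11, 12, 2, 1]])  # 2 = eos, 1 = pad
print(shift_tokens_right(labels, pad_token_id=1))  # tensor([[ 2, 10, 11, 12,  2]])
```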
08-20-2020 18:52:22
08-20-2020 18:52:22
Yes, definitely. Better to handle it in the model, since lots of recent issues were due to labels being shifted incorrectly or not shifted at all.<|||||>But then `prepare_seq2seq_batch` should return `labels` instead of `decoder_input_ids`<|||||>Yep, I'll take a stab.
transformers
6,623
closed
Fix confusing warnings during TF2 import from PyTorch
1. Swapped missing_keys and unexpected_keys. 2. A copy-and-paste error caused these warnings to say "from TF 2.0" when it's actually "from PyTorch".
08-20-2020 18:43:51
08-20-2020 18:43:51
This fixes the confusing warnings mentioned in #5588<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=h1) Report > Merging [#6623](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/86c07e634f3624cdf3f9e4e81ca53b808c4b22c6?el=desc) will **decrease** coverage by `0.84%`. > The diff coverage is `66.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6623/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6623 +/- ## ========================================== - Coverage 80.03% 79.18% -0.85% ========================================== Files 156 156 Lines 28217 28217 ========================================== - Hits 22584 22345 -239 - Misses 5633 5872 +239 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <66.66%> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `14.42% <0.00%> (-42.89%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.00% <0.00%> (-36.00%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.99% <0.00%> (-24.28%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.44% <0.00%> (-0.14%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=footer). Last update [86c07e6...15f3e23](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hello thanks for the PR! LGTM. Asking @LysandreJik to review.
transformers
6,622
closed
Move threshold up for flaky test with Electra
As discussed on Slack, this should take care of the flaky Electra PT/TF equivalence test. @LysandreJik, just a ping for when you're back so you're aware.
08-20-2020 17:37:58
08-20-2020 17:37:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=h1) Report > Merging [#6622](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c9454507cf57d38fd863c2544300c88583fc60e3?el=desc) will **increase** coverage by `0.68%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6622/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6622 +/- ## ========================================== + Coverage 79.01% 79.69% +0.68% ========================================== Files 156 156 Lines 28217 28217 ========================================== + Hits 22295 22487 +192 + Misses 5922 5730 -192 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `83.42% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=footer). Last update [9539583...ef1053f](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Great, thanks for taking care of it!
transformers
6,621
closed
[Tests] fix attention masks in Tests
This PR should fix the flaky test failures of `test_modeling_output_equivalence` and `test_feed_forward_chunking`. I added a new random `attention_mask` generation function that ensures at least one token is attended to in every row.
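A rough sketch of the idea (illustrative names, not necessarily the exact helper added in this PR):

```python
import torch

def random_attention_mask(batch_size: int, seq_len: int) -> torch.Tensor:
    """Random 0/1 mask in which every row attends to at least one token."""
    mask = torch.randint(0, 2, (batch_size, seq_len))
    mask[:, 0] = 1  # force one attended position so no row is all zeros
    return mask

print(random_attention_mask(4, 7))
```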
08-20-2020 16:39:54
08-20-2020 16:39:54
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=h1) Report > Merging [#6621](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/573bdb0a5d2897ff6c7520ebb38693c7acfbf17e?el=desc) will **increase** coverage by `0.85%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6621/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6621 +/- ## ========================================== + Coverage 79.16% 80.02% +0.85% ========================================== Files 156 156 Lines 28217 28217 ========================================== + Hits 22339 22581 +242 + Misses 5878 5636 -242 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.18% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+2.60%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.75%)` | :arrow_up: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=footer). Last update [573bdb0...d61cbf8](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,620
closed
Pegasus: OSError: Unable to load weights from pytorch checkpoint file.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sshleifer ## Information Model I am using (Bert, XLNet ...): google/pegasus-cnn_dailymail The problem arises when using: ``` import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' model_name = 'google/pegasus-cnn_dailymail' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) ``` Traceback: ``` RuntimeError Traceback (most recent call last) ~/projects/transformers/src/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 854 try: --> 855 state_dict = torch.load(resolved_archive_file, map_location="cpu") 856 except Exception: ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args) 584 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) --> 585 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) 586 ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args) 771 assert key in deserialized_objects --> 772 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly) 773 if offset is not None: RuntimeError: unexpected EOF, expected 10498989 more bytes. The file might be corrupted. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-1-1ae6eb884edd> in <module> 7 model_name = 'google/pegasus-cnn_dailymail' 8 tokenizer = PegasusTokenizer.from_pretrained(model_name) ----> 9 model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) ~/projects/transformers/src/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 855 state_dict = torch.load(resolved_archive_file, map_location="cpu") 856 except Exception: --> 857 raise OSError( 858 "Unable to load weights from pytorch checkpoint file. " 859 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. " OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ```
08-20-2020 16:23:36
08-20-2020 16:23:36
works for me in in torch 1.5.1. and torch 1.6. Maybe this is a one off s3 failure? Can anybody else replicate? ``` from transformers import PegasusForConditionalGeneration model = PegasusForConditionalGeneration.from_pretrained(model_name) ```<|||||>I set ```force_download=True``` and it worked. Thanks!<|||||>> I set `force_download=True` and it worked. Thanks! can you describe in detail how did you solved the problem <|||||>Just upgrading the PyTorch and TensorFlow version solved the problem for me. <|||||>torch==1.6.0 tensorflow==2.3.1 transformers==3.5.1 And I'm trying to load my model trained on gpt2-small named train-on-test1. But I get the an OSERROR: Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' OSError: Unable to load weights from pytorch checkpoint file for '/mounted/models/train-on-test1/' at '/mounted/models/train-on-test1/pytorch_model.bin' If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. <|||||>I'm getting the same using torch == 1.7.0 and transformers == 4.1.1 and a xlnet localy downloaded model : ``` from transformers import XLNetForSequenceClassification model = XLNetForSequenceClassification.from_pretrained('/../../models/xlnet/', num_labels = 3) OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ```<|||||>I keep getting the same error too. I have pretrained a distilbertmodel named amazon-distilbert. When I am trying to load it using from pretrained it is throwing the same error. ``` from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-cased"); model = DistilBertForSequenceClassification.from_pretrained("../models/amazon-distilbert") ``` And the error ``` f"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' " OSError: Unable to load weights from pytorch checkpoint file for '/GitHub/TextSentimentAnalysis/models/amazon-distilbert' at '//GitHub/TextSentimentAnalysis/models/amazon-distilbert/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ```<|||||>same error with torch 1.5.0, TensorFlow 2.4.1, and transformers 4.2.2. Does anyone know how to solve such a problem?<|||||>Same error with torch 1.8.1, transformers 4.5.1 when trying to call ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-ja-en').save_pretrained('pretrained_models/opus-mt-ja-en') ```<|||||>can anyone share how to solve this error to me? plz , i have met this problem too<|||||>Same issue here ..<|||||>> > I set `force_download=True` and it worked. Thanks! > > can you describe in detail how did you solved the problem In the **from_pretrained** function, set one of the parameters as **force_download=True** Eg. - model = LayoutLMForTokenClassification.from_pretrained("microsoft/layoutlm-base-uncased", num_labels=num_labels, **force_download=True**)<|||||>In my case, I was running on a cpu only compute, and this issue got solved when installing a cpu version of PyTorch. For example: http://download.pytorch.org/whl/cpu/torch-1.13.0%2Bcpu-cp39-cp39-linux_x86_64.whl
transformers
6,619
closed
[DistilBert] Flaky tests
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-61-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.5 - PyTorch version (GPU?): 1.6.0+cpu (False) - Tensorflow version (GPU?): 2.1.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Reproduce Error: ```python #!/usr/bin/env python3 import torch from transformers import DistilBertModel, DistilBertConfig input_ids = torch.tensor([[55, 40, 88, 37, 12, 6, 20], [33, 87, 56, 6, 34, 92, 2], [ 4, 25, 95, 19, 9, 14, 80], [96, 45, 71, 10, 78, 33, 68], [72, 40, 59, 90, 5, 78, 44], [36, 15, 11, 18, 74, 40, 30], [84, 25, 5, 61, 18, 77, 35], [70, 87, 9, 42, 24, 65, 11], [28, 0, 28, 45, 92, 83, 96], [75, 41, 69, 61, 83, 31, 81], [94, 93, 79, 48, 24, 17, 9], [97, 5, 38, 94, 75, 8, 59], [31, 71, 87, 39, 97, 10, 22]]) attention_mask = torch.tensor([[1, 1, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1], [0, 1, 1, 1, 0, 0, 1], [1, 1, 1, 0, 1, 1, 0], [1, 1, 1, 1, 1, 1, 1], [0, 0, 1, 0, 0, 1, 1], [1, 0, 1, 0, 0, 0, 1], [0, 1, 0, 1, 1, 1, 0], [0, 1, 1, 0, 0, 0, 0], [0, 1, 0, 1, 1, 0, 1], [0, 1, 1, 1, 1, 1, 1], [0, 1, 0, 1, 0, 0, 0]]) distil_bert_config = { "activation": "gelu", "attention_dropout": 0.1, "dim": 32, "dropout": 0.1, "hidden_act": "gelu", "hidden_dim": 37, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 4, "n_layers": 5, "pad_token_id": 0, "qa_dropout": 0.1, "return_dict": True, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": False, "vocab_size": 99 } config = DistilBertConfig(**distil_bert_config) torch.manual_seed(0) model = DistilBertModel(config).eval() last_hidden_state = model(input_ids, attention_mask=attention_mask)[0] if torch.isnan(last_hidden_state).any().item(): print("Error with DistilBert") ``` This code example allows yields nan values.
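Note that the second row of `attention_mask` in the snippet above is all zeros, which is exactly the kind of row that can make the masked softmax collapse to NaN. A small self-contained diagnostic (sketch only):

```python
import torch

attention_mask = torch.tensor([[1, 1, 1, 1, 0, 1, 0],
                               [0, 0, 0, 0, 0, 0, 0],
                               [1, 1, 1, 1, 1, 1, 1]])
# Rows where no token is attended to are the candidates for producing NaNs.
fully_masked = (attention_mask.sum(dim=1) == 0).nonzero(as_tuple=True)[0]
print("fully masked rows:", fully_masked.tolist())  # [1]
```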
08-20-2020 16:07:40
08-20-2020 16:07:40
Will investigate now @sgugger @VictorSanh. My first guess is that it is because of "inf" values introduced by the masking, since this error does not happen when the attention mask is not passed to forward.
transformers
6,618
closed
TFTrainer dataset doc & fix evaluation bug
discussed in #6551
08-20-2020 15:15:36
08-20-2020 15:15:36
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=h1) Report > Merging [#6618](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/039d8d65fc19ac74a8c7917233eb2828c46c0fa7?el=desc) will **decrease** coverage by `0.93%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6618/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6618 +/- ## ========================================== - Coverage 79.79% 78.86% -0.94% ========================================== Files 156 156 Lines 28213 28213 ========================================== - Hits 22513 22250 -263 - Misses 5700 5963 +263 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (ø)` | | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.99% <0.00%> (-1.31%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=footer). Last update [039d8d6...76afd14](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,617
closed
unk handling in v3.0 different than v2.0?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I have this problem in runing my code in transformers3.0.2, Trainable parameters: 102274668 248 250 252 252 248 250 252 252 Traceback (most recent call last): File "train.py", line 92, in <module> run('./configs/base_config.json') File "train.py", line 88, in run main(config) File "train.py", line 66, in main trainer.train() File "/DATA2/disk1/wangbingchen/project/ccks2020-task8-pytorch/base/base_trainer.py", line 67, in train result = self._train_epoch(epoch) File "/DATA2/disk1/wangbingchen/project/ccks2020-task8-pytorch/trainer/trainer.py", line 59, in _train_epoch for batch_idx, batch_data in enumerate(self.train_iter): File "/home/wangbingchen/wangbingchen/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/home/wangbingchen/wangbingchen/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/wangbingchen/wangbingchen/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/DATA2/disk1/wangbingchen/project/ccks2020-task8-pytorch/data_process/military_data_process.py", line 179, in collate_fn text_token_ids = torch.LongTensor(np.array(text_token_ids)) TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool. this is the problem of embedding length.But when I run this code in transformers2.11,everything is well.So,I want to ask the difference of transformers2.11 and transformers3.0.2.Thanks very much! <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
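A quick way to see this kind of behavior difference for yourself is to run the same snippet under each transformers version and compare the outputs (diagnostic sketch only; the checkpoint name is a placeholder, swap in the one you actually use, and no particular result is assumed):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
for ch in ["\n", "\r", "hello"]:
    print(repr(ch), tokenizer.encode(ch, add_special_tokens=False))
# If one version maps "\n" to an [UNK] id while another returns an empty list,
# sequences in a batch end up with different lengths and can no longer be
# stacked into a LongTensor.
```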
08-20-2020 14:27:33
08-20-2020 14:27:33
Hey @BCWang93, can you post a reproducible code snippet? It would be great if we could just copy paste your code and see the same error you post here :-) <|||||>> Hey @BCWang93, can you post a reproducible code snippet? It would be great if we could just copy paste your code and see the same error you post here :-) Because this is an entire project, I can't paste the full code. But I found the difference between transformers 3 and 2. Transformers2 maps ID to "[unk]" when dealing with characters like '\n','\r' et.al, but transformers3 discards all such characters when dealing with such cases. Like this: ![image](https://user-images.githubusercontent.com/31853251/90902807-9db0d980-e3ff-11ea-8ebf-4abf3d97b732.png) Thanks very much!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,616
closed
Fine tune masked language model on custom dataset 'index out of range in self'
# ❓ Questions & Help Hi I am training a mlm model with reference of this tutorial, https://huggingface.co/blog/how-to-train. However,I got the error of 'index out of range in self' but I already set max_length as well as block size in my code. I am also not clear how to load and prepare data, should I mask certain words by myself? and pass the masked label while training by myself? Because herehttps://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm I see '– Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size]' I am not sure using Trainer and DataCollatorForLanguageModeling could solve this. ## Details <!-- Description of your issue --> To reprpduce: ``` tokenizer = BertTokenizer.from_pretrained('./bert-large-cased',truncation = True, padding=True, max_length=100) model = BertForMaskedLM.from_pretrained( "./bert-large-cased", output_attentions = False, output_hidden_states = True ) device = 'cuda' if torch.cuda.is_available() else 'cpu' # Tell pytorch to run this model on the GPU. model = model.to(device) model.train() from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer,mlm=True, mlm_probability=0.15 #I don't know if this is the only way to set up mask for mlm task.... ) from transformers import LineByLineTextDataset train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="./na_en_train.txt", #I reorgnasized data to line by line form as the tutorial, which is stupid. but I also tried TensorDataset, it got errors block_size=100, ) eval_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="./na_en_test.txt", block_size=100, ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./results", overwrite_output_dir=True, num_train_epochs=3, per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=16, save_steps=10000, save_total_limit=2 ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, prediction_loss_only=True, ) trainer.train() ``` #Error Message: --------------------------------------------------------------------------- > IndexError Traceback (most recent call last) > <ipython-input-116-3435b262f1ae> in <module> > ----> 1 trainer.train() > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path) > 497 continue > 498 > --> 499 tr_loss += self._training_step(model, inputs, optimizer) > 500 > 501 if (step + 1) % self.args.gradient_accumulation_steps == 0 or ( > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer) > 620 inputs["mems"] = self._past > 621 > --> 622 outputs = model(**inputs) > 623 loss = outputs[0] # model outputs are always tuple in transformers (see doc) > 624 > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) > 548 .. note:: > 549 This method modifies the module in-place. 
> --> 550 > 551 Args: > 552 device (:class:`torch.device`): the desired device of the parameters > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, **kwargs) > 1081 encoder_attention_mask=encoder_attention_mask, > 1082 output_attentions=output_attentions, > -> 1083 output_hidden_states=output_hidden_states, > 1084 ) > 1085 > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) > 548 .. note:: > 549 This method modifies the module in-place. > --> 550 > 551 Args: > 552 device (:class:`torch.device`): the desired device of the parameters > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states) > 751 > 752 embedding_output = self.embeddings( > --> 753 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds > 754 ) > 755 encoder_outputs = self.encoder( > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) > 548 .. note:: > 549 This method modifies the module in-place. > --> 550 > 551 Args: > 552 device (:class:`torch.device`): the desired device of the parameters > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) > 176 > 177 if inputs_embeds is None: > --> 178 inputs_embeds = self.word_embeddings(input_ids) > 179 position_embeddings = self.position_embeddings(position_ids) > 180 token_type_embeddings = self.token_type_embeddings(token_type_ids) > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) > 548 .. note:: > 549 This method modifies the module in-place. > --> 550 > 551 Args: > 552 device (:class:`torch.device`): the desired device of the parameters > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input) > 112 assert list(_weight.shape) == [num_embeddings, embedding_dim], \ > 113 'Shape of weight does not match num_embeddings and embedding_dim' > --> 114 self.weight = Parameter(_weight) > 115 self.sparse = sparse > 116 > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) > 1722 if not torch.jit.is_scripting(): > 1723 if type(input) is not Tensor and has_torch_function((input,)): > -> 1724 return handle_torch_function(hardswish, (input,), input, inplace=inplace) > 1725 if inplace: > 1726 return torch._C._nn.hardswish_(input) > > IndexError: index out of range in self > **A link to original question on the forum/Stack Overflow**: https://discuss.huggingface.co/t/fine-tune-masked-language-model-on-custom-dataset/747
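On the question of whether you need to mask words and build labels yourself: a minimal, self-contained sketch of what `DataCollatorForLanguageModeling` produces, so you can check that the -100 label convention is handled for you (the checkpoint below is a placeholder):

```python
import torch
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")  # placeholder checkpoint
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

example = torch.tensor(tokenizer("a short example sentence")["input_ids"])
batch = collator([example, example])
print({k: v.shape for k, v in batch.items()})
# The collator masks ~15% of the tokens in input_ids and sets the label to -100
# at every position that was not masked, which is what the loss expects.
labels = batch["labels"] if "labels" in batch else batch["masked_lm_labels"]
print((labels == -100).float().mean())
```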
08-20-2020 13:57:37
08-20-2020 13:57:37
same question<|||||>same issues coming for me as well<|||||>same issue<|||||>Does this still happen in the latest transformers version? Could you put the output of `transformers-cli` env here? Thanks.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>Reopening. Still having same issue?
transformers
6,615
closed
I can't reproduce the results of tf-xlm-r-ner-40-lang model
I tried to reproduce the results of the tf-xlm-r-ner-40-lang model, but there were compatibility issues in token-classification/run_tf_ner.py. I fixed some of these issues, but it's still not running as expected. @jplu could you please share the transformers/tf versions used to produce the results of the tf-xlm-r-ner-40-lang model? I'm using: transformers = 3.0.2 tensorflow = 2.3.0 Thanks in advance.
08-20-2020 12:37:37
08-20-2020 12:37:37
Hello! What is the command line you use to train the model?<|||||>Hello @jplu that's the command line I tried to use to train the model (you added that command to the model description on huggingface community models) ``` cd examples/ner python run_tf_ner.py \ --data_dir . \ --labels ./labels.txt \ --model_name_or_path jplu/tf-xlm-roberta-base \ --output_dir model \ --max-seq-length 128 \ --num_train_epochs 2 \ --per_gpu_train_batch_size 16 \ --per_gpu_eval_batch_size 32 \ --do_train \ --do_eval \ --logging_dir logs \ --mode token-classification \ --evaluate_during_training \ --optimizer_name adamw ```<|||||>Can you use the last version of the trainer, with th ecommand line I used: ``` python run_tf_ner.py \ --data_dir . \ --labels ./labels.txt \ --model_name_or_path jplu/tf-xlm-roberta-base \ --output_dir model \ --num_train_epochs 8 \ --per_gpu_train_batch_size 32 \ --per_gpu_eval_batch_size 64 \ --do_train \ --do_eval \ --do_predict \ --logging_steps 10 \ --evaluate_during_training \ --save_steps 100 \ --overwrite_output_dir ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,614
closed
removed redundant arg in prepare_inputs
I am not sure why `model` was being passed in `_prepare_inputs`. It seemed redundant.
08-20-2020 09:08:51
08-20-2020 09:08:51
You may be right but maybe there is a reason? @sgugger <|||||>I'm guessing it was used at some point and then I forgot to remove it when it wasn't used anymore. Thanks for fixing!
transformers
6,613
closed
FillMaskPipeline return special tokens i.e. <mask> as prediction
# ❓ FillMaskPipeline return special tokens i.e. \<mask\> as prediction ## Details Im training a new language model from scratch using ByteLevelBPETokenizer for tokenizer and RobertaForMaskedLM for self-supervised language model. The config is as follow: Tokenizer: ``` tokenizer = ByteLevelBPETokenizer() tokenizer.train(files=paths, vocab_size=100000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) ``` After the tokenizer was trained, it was loaded using `tokenizer = RobertaTokenizerFast.from_pretrained("./BERTmese/Tokenizer", max_length=512)` And the RobertaForMaskedLM is configed as follow: ``` config = RobertaConfig( vocab_size=100000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, ) model = RobertaForMaskedLM(config=config) ``` Then I train the RobertaForMaskedLM model using custom dataset and Adam optimizer as follow: ``` class CustomTextDataset(Dataset): def __init__(self, tokenizer, dataset, max_length): self.tokenizer = tokenizer self.dataset = dataset self.max_length = max_length def __len__(self): return len(self.dataset) def __getitem__(self, i) -> torch.Tensor: encoding = self.tokenizer(self.dataset[i]['text'], add_special_tokens=True, max_length=self.max_length, padding='max_length', truncation=True) return torch.tensor(encoding["input_ids"], dtype=torch.long) ``` ``` for step in data_loader: input_ids = next(iter(train_loader)) input_ids = input_ids.to(device) outputs = model(input_ids, labels=input_ids) loss, _ = outputs[:2] loss.backward() ... ``` For every 100 steps, I test the trained model using this code snippet: ``` model.eval() # test on a sample text fill_mask = FillMaskPipeline( model=model, tokenizer=tokenizer, topk=1 ) mask = fill_mask("ယခုလတွင်ပျားရည်နှင့်ပျားဖယောင်းများကိုစုဆောင်း<mask>သည်ဟုခန့်မှန်းနိုင်သည်။")[0] print(mask['token'], decoder.decode([mask['token_str']]), mask['score']) # set model back to train mode model.train() ``` In the first hundred steps, the predict token is fined (it predict some Burmese word). However, after that, the model start to predict the '\<mask\>' token as the result. Im not sure if predicting \<mask\> token for masked language is normal?
08-20-2020 07:24:12
08-20-2020 07:24:12
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,612
closed
wip/mbart: make batches that are identical to fairseq
This seems to add about +0.2 BLEU.
08-20-2020 02:41:47
08-20-2020 02:41:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=h1) Report > Merging [#6612](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/18ca0e91402d17950b870d7c9f67ddb7fd573817&el=desc) will **decrease** coverage by `0.61%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6612/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6612 +/- ## ========================================== - Coverage 79.89% 79.28% -0.62% ========================================== Files 156 156 Lines 28213 28219 +6 ========================================== - Hits 22542 22374 -168 - Misses 5671 5845 +174 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `97.14% <100.00%> (+1.83%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.33%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+6.20%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=footer). Last update [18ca0e9...2207e5d](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,611
closed
TextGenerationPipeline giving FutureWarning about AutoModelWithLMHead
@TevenLeScao ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information Model I am using (Bert, XLNet ...): GPT2LMHeadModel The TextGenerationPipeline object causes FutureWarning about class `AutoModelWithLMHead` being deprecated ``` from transformers import pipeline text_generator = pipeline("text-generation", model="pranavpsv/gpt2-genre-story-generator") ``` ## To reproduce Steps to reproduce the behavior: Run the above script. This is the warning: ``` /usr/local/lib/python3.6/dist-packages/transformers/modeling_auto.py:798: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. FutureWarning, ``` ## Expected behavior I expected no warning since I thought the TextGenerationPipeline would use AutoModelForCausalLM object or GPT2LMHeadModel object for the model.
08-20-2020 00:37:02
08-20-2020 00:37:02
Hey @pranavpsv, Yes, I think we can switch the `TextGenerationPipeline` to `AutoModelForCausalLM` and `AutoModelForSeq2SeqLM`. So check if the model is in one of the above and then use it, instead of using `AutoModelWithLMHead`. Also once we have a `ConditionalTextGenerationPipeline` we can remove the `AutoModelForSeq2SeqLM` dependency. Feel free to open a PR about this :-) <|||||>@patrickvonplaten Got it, thank you for the info :) My model is just a GPT2 model, so I believe it should use AutoModelForCausalLM. I just checked on the transformers master branch in the pipelines.py source code [file](https://github.com/huggingface/transformers/blob/d0e42a7bed3de9271ae39c575d7eeb54cf985921/src/transformers/pipelines.py#L2432). It shows that the pipelines.py file from master branch doesn't use AutoModelWithLMHead. However, for some reason, when pip installing the latest version of transformers, the Pipeline object (with task as text-generation) still gives the AutoModelWithLMHead deprecated warning (indicating that it might be importing AutoModelWithLMHead). To confirm, I found that the installed transformers (3.0.2) pipelines.py file imports AutoModelWithLMHead (which is what could be causing this warning): ``` # The pipelines.py file if is_torch_available(): import torch from .modeling_auto import ( AutoModel, AutoModelForSequenceClassification, AutoModelForQuestionAnswering, AutoModelForTokenClassification, AutoModelWithLMHead, AutoModelForSeq2SeqLM, ) ``` There seems to be a discrepancy between master branch and the 3.0.2 release pipelines object. For now, I'm doing the following to avoid the warning. ``` model = GPT2LMHeadModel.from_pretrained(checkpoint) tokenizer = AutoTokenizer.from_pretrained(checkpoint) text_generator = TextGenerationPipeline(model=model, tokenizer=tokenizer) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
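For readers hitting the same warning before the pipeline switch lands, here is a self-contained version of the workaround above (imports included; the checkpoint name is the one from this thread, the rest is only an illustrative sketch, not the pipeline's internal code):

```python
from transformers import AutoTokenizer, GPT2LMHeadModel, TextGenerationPipeline

checkpoint = "pranavpsv/gpt2-genre-story-generator"

# Load the causal LM class explicitly so the deprecated AutoModelWithLMHead path
# is never touched and no FutureWarning is emitted.
model = GPT2LMHeadModel.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
text_generator = TextGenerationPipeline(model=model, tokenizer=tokenizer)

print(text_generator("Once upon a time", max_length=50))
```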
transformers
6,610
closed
[seq2seq Example] Convert tensor to List[int] for decoding
Ran into an error today while using `finetune.py` where decoding kept failing because the output was a `torch.Tensor` instead of a `List[int]`. This fixes that.
08-19-2020 23:58:39
08-19-2020 23:58:39
what are your versions? This doesn't break for me. Try running ``` transformers-cli env ```<|||||>The latest version from PyPI, v3.0.2. It doesn't happen while running locally but does break on remote executions, though. Looking through the documentation, it made sense to me that an error would pop up since [`.generate()`](https://huggingface.co/transformers/model_doc/bart.html#transformers.BartForConditionalGeneration.generate) outputs a `torch.Tensor`, but [`tokenizer.decode()`](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.decode) (and batch_decode) expect `List[int]`<|||||>PyTorch 1.6.0 and Python 3.6, if that helps.<|||||>Can I see your traceback? You can loop over tensors just like lists.<|||||>I have a custom `summarization_trainer.py` in there that calls the `main` from `finetune.py`. Traceback: ``` File "summarization_trainer.py", line 157, in main return finetune_main(args, model) File "/opt/ml/code/finetune.py", line 460, in main logger=logger, File "/opt/ml/code/lightning_base.py", line 448, in generic_train trainer.fit(model) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1044, in fit results = self.run_pretrain_routine(model) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1196, in run_pretrain_routine False) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 293, in _evaluate output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 470, in evaluation_forward output = model.validation_step(*args) File "/opt/ml/code/finetune.py", line 211, in validation_step return self._generative_step(batch) File "/opt/ml/code/finetune.py", line 255, in _generative_step preds: List[str] = self.ids_to_clean_text(generated_ids) File "/opt/ml/code/finetune.py", line 153, in ids_to_clean_text generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2254, in batch_decode return [self.decode(seq, **kwargs) for seq in sequences] File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2254, in <listcomp> return [self.decode(seq, **kwargs) for seq in sequences] File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 439, in decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) File "/opt/conda/lib/python3.6/site-packages/tokenizers/implementations/base_tokenizer.py", line 267, in decode return self._tokenizer.decode(ids, skip_special_tokens=skip_special_tokens) TypeError ```<|||||>is there anything after TypeError? I guess this is a torch 1.6 issue.<|||||>No, it doesn't say anything after `TypeError`. Could be a 1.6 issue, let me try to downgrade and report back.<|||||>Hmm, something's off. Now I'm seeing `TypeError` when trying to save the tokenizer during a checkpoint. 
``` Traceback (most recent call last): File "/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/summarization_trainer.py", line 214, in <module> _ = slack_wrapper(args) File "/opt/conda/lib/python3.6/site-packages/knockknock/slack_sender.py", line 105, in wrapper_sender raise ex File "/opt/conda/lib/python3.6/site-packages/knockknock/slack_sender.py", line 63, in wrapper_sender value = func(*args, **kwargs) File "/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/summarization_trainer.py", line 159, in main return finetune_main(args, model) File "/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/finetune.py", line 463, in main logger=logger, File "/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/lightning_base.py", line 448, in generic_train trainer.fit(model) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit results = self.single_gpu_train(model) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 186, in single_gpu_train results = self.run_pretrain_routine(model) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine self.train() File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train self.run_training_epoch() File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 470, in run_training_epoch self.run_evaluation(test_mode=False) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 430, in run_evaluation self.on_validation_end() File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_hook.py", line 112, in on_validation_end callback.on_validation_end(self, self.get_model()) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py", line 12, in wrapped_fn return fn(*args, **kwargs) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 309, in on_validation_end self._do_check_save(filepath, current, epoch) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 346, in _do_check_save self._save_model(filepath) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 168, in _save_model self.save_function(filepath, self.save_weights_owandb: Program failed with code 1. Press ctrl-c to abort syncing. 
nly) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_io.py", line 268, in save_checkpoint checkpoint = self.dump_checkpoint(weights_only) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_io.py", line 379, in dump_checkpoint model.on_save_checkpoint(checkpoint) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py", line 12, in wrapped_fn return fn(*args, **kwargs) File "/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/lightning_base.py", line 223, in on_save_checkpoint self.tokenizer.save_pretrained(save_path) File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1379, in save_pretrained vocab_files = self.save_vocabulary(save_directory) File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 449, in save_vocabulary files = self._tokenizer.save_model(save_directory) File "/opt/conda/lib/python3.6/site-packages/tokenizers/implementations/base_tokenizer.py", line 323, in save_model return self._tokenizer.model.save(directory, name=name) TypeError ``` I have the changes proposed in this PR in my code so it is clearly not the same.<|||||>^ is occurring on torch 1.5.1 and 1.6.0 both.<|||||>Update: I was creating a Fast tokenizer and passing it in during the creation of `SummarizationModule`. Switching to the one automatically created by the `BaseTransformer` avoids the last error.<|||||>The change in the PR is also related to the above error and creating a new tokenizer instead of an automatic initialization. Letting `SummarizationModule` and `BaseTransformer` deal with the creation of tokenizer instead doesn't raise those errors, however, that also means I can't use the Rust-based tokenizers.<|||||>Great catch! Would you mind making a new issue with a broken snippet that doesn't use PL? and tag @sshleifer E.g. ``` from transformers import BartTokenizerFast, BartForConditionalGeneration ... ``` ? Then we can try to fix the bug on the proper level of abstraction.<|||||>Thanks! Yes, agree that the right place for this is an issue. I'll try to create an issue that doesn't use PL later in the day.
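Until that issue exists, a minimal sketch of the type concern this PR addresses — without PyTorch Lightning, and with the model name chosen only for illustration — might look like:

```python
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

batch = tokenizer(["A long article to summarize ..."], return_tensors="pt")
generated_ids = model.generate(batch["input_ids"], attention_mask=batch["attention_mask"])

# generate() returns a torch.Tensor, while decode()/batch_decode() document List[int]
# inputs, so the rows are converted to plain Python lists before decoding.
preds = tokenizer.batch_decode(generated_ids.tolist(), skip_special_tokens=True)
print(preds)
```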
transformers
6,609
closed
PegasusForConditionalGeneration - Error in loading state dictionary
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-1034-azure-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed - Using GPU in script?: Tried both - Using distributed or parallel set-up in script?: No ### Who can help @sshleifer ## Information Model I am using (Bert, XLNet ...): google/pegasus-arxiv The problem arises when using: the official example scripts The task I am working on is: generating a summary using pegasus-arxiv ## To reproduce Steps to reproduce the behavior: run the script below ```python from transformers import PegasusForConditionalGeneration  mname = "google/pegasus-arxiv" model = PegasusForConditionalGeneration.from_pretrained(mname) ``` This throws the following error: File "abstractive_summarizer.py", line 21, in <module> model = PegasusForConditionalGeneration.from_pretrained(mname, force_download=True) File "/anaconda/envs/py37_default/lib/python3.7/site-packages/transformers/modeling_utils.py", line 894, in from_pretrained model.__class__.__name__, "\n\t".join(error_msgs) RuntimeError: Error(s) in loading state_dict for PegasusForConditionalGeneration: size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). ## Expected behavior I tried running a sample in the console this morning and it worked fine: I was able to generate a summary using pegasus-arxiv. Once I transferred this to a Jupyter notebook for a quick trial, it re-downloaded pegasus-arxiv and has been giving this error ever since. (If you are not able to reproduce it, please try with force_download=True)
08-19-2020 23:31:38
08-19-2020 23:31:38
I just fixed. Can you try again. Should produce a warning but no error.<|||||>> I just fixed. Can you try again. Should produce a warning but no error. I am still getting the below error. It does not seem to even go to the tokenizer. It throws the error right when we acquire the pretrained model. It seems as though something about the checkpoint of the pre-trained model has changed RuntimeError: Error(s) in loading state_dict for PegasusForConditionalGeneration: size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).<|||||>I changed the state dict and deleted `model.encoder.embed_positions.weight` at 6pm EST. Your code works for me on master (93c5c9a5) command: `transformers-cli env` ``` - `transformers` version: 3.0.2 - Platform: Darwin-19.5.0-x86_64-i386-64bit - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` <|||||>> `model.encoder.embed_positions.weight` I completely removed transformers and took a fresh git and I am still getting the same error. Is there some other information I could provide that will help resolve this issue for me?<|||||>> I changed the state dict and deleted `model.encoder.embed_positions.weight` at 6pm EST. > > Your code works for me on master ([93c5c9a](https://github.com/huggingface/transformers/commit/93c5c9a528475db73c2b481131578b8dd903efba)) > > command: `transformers-cli env` > > ``` > - `transformers` version: 3.0.2 > - Platform: Darwin-19.5.0-x86_64-i386-64bit > - Python version: 3.7.7 > - PyTorch version (GPU?): 1.5.1 (False) > - Tensorflow version (GPU?): 2.2.0 (False) > - Using GPU in script?: <fill in> > - Using distributed or parallel set-up in script?: <fill in> > ``` The error occurs only for pegasus-arxiv. It works with warning for pegasus-pubmed and pegasus-large. I need help with pegasus-arxiv.
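One possible workaround while the `google/pegasus-arxiv` files are being sorted out — untested here, so treat the override as an assumption rather than a fix — is to force the allocated position embeddings to match the 512 entries stored in the older checkpoint:

```python
from transformers import PegasusForConditionalGeneration

mname = "google/pegasus-arxiv"
# force_download avoids a stale cached checkpoint; max_position_embeddings=512 makes the
# freshly built model match the 512 position embeddings the old state dict contains.
# Drop both overrides once the fixed checkpoint loads cleanly.
model = PegasusForConditionalGeneration.from_pretrained(
    mname, force_download=True, max_position_embeddings=512
)
```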
transformers
6,608
closed
How to use Huggingface model for continuous values directly?
Hi, I have a dataset which contains continuous values of shape [ batch_size, features ]. The features look like this: `[0.49221584, -0.021571456, -0.0920076, -0.14408934, -0.62306774]` I want to apply a transformer model on these values and pass the result to a final layer, something like this: `batch_data ==> Transformer ==> output_layer ==> classification` Currently, I am using hand-coded multi-head attention and norm with a feed-forward network to pass these values through the transformer block. I have gone through the Hugging Face models, but all of them accept tokens and sequences. Is there any way/hack to use Hugging Face transformer models directly on continuous values?
08-19-2020 20:58:12
08-19-2020 20:58:12
you could pass `inputs_embeds` instead of `input_ids` to the forward pass and adjust the `hidden_size` accordingly. Or you just tweak the model files yourself: remove `nn.Embedding` and replace it with a dense layer. Btw, normally you get much better answers for these kinds of questions when you post them on https://discuss.huggingface.co/ . We try to move "non-bug" questions to this forum :-) <|||||>@patrickvonplaten Thank you for the reply. Sure, from now on I'll post my doubts there. If you could provide any quick template to start with continuous values, that’d be helpful.<|||||>Hi @patrickvonplaten, I asked this question on the discussion forum but didn't get any update yet. Can you provide any starter code/template where I can feed continuous values directly? https://discuss.huggingface.co/t/how-to-use-huggingface-model-for-continuous-values-directly/816<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
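To make the `inputs_embeds` route concrete, here is a minimal sketch (the sizes and the classification head are assumptions for illustration, not a recommended recipe):

```python
import torch
from transformers import BertConfig, BertModel

config = BertConfig(hidden_size=128, num_hidden_layers=2, num_attention_heads=4)
model = BertModel(config)

batch = torch.randn(8, 5)                        # [batch_size, features] of continuous values
project = torch.nn.Linear(1, config.hidden_size)
classify = torch.nn.Linear(config.hidden_size, 2)

# Treat every feature as one "token": [batch, 5] -> [batch, 5, 1] -> [batch, 5, hidden_size]
inputs_embeds = project(batch.unsqueeze(-1))
sequence_output = model(inputs_embeds=inputs_embeds)[0]

# Pool the first position and classify, mirroring batch_data ==> Transformer ==> output_layer.
logits = classify(sequence_output[:, 0])
```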
transformers
6,607
closed
[Longformer] try if multi gpu works
08-19-2020 20:24:59
08-19-2020 20:24:59
transformers
6,606
closed
Regression test for pegasus bugfix
The bug was that tokenizer.max_model_length (set through `tokenizer_config.json`) was sometimes 1024, but `max_position_embeddings` was only 512. This means that the tokenizer can produce inputs longer than 512 tokens, which will produce an IndexError when we try to look those positions up in the embedding table. Since the position embeddings are static for pegasus, the fix was on s3: set max_position_embeddings to the correct value, and remove the saved, incorrectly sized position embeddings from the state dict. The tradeoff here is that users can now pass max_position_embeddings=HUGE to the model without error, pass huge inputs, and either OOM or get shitty performance. But if you are modifying the config you sort of know what you are doing, so I'm OK with it. This PR: - suppresses the warning created by the S3 change - adds a regression test that `config.max_position_embeddings >= tokenizer.max_model_length`
08-19-2020 20:10:33
08-19-2020 20:10:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=h1) Report > Merging [#6606](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/18ca0e91402d17950b870d7c9f67ddb7fd573817?el=desc) will **decrease** coverage by `0.61%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6606/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6606 +/- ## ========================================== - Coverage 79.89% 79.28% -0.62% ========================================== Files 156 156 Lines 28213 28215 +2 ========================================== - Hits 22542 22370 -172 - Misses 5671 5845 +174 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3BlZ2FzdXMucHk=) | `100.00% <100.00%> (+9.09%)` | :arrow_up: | | [src/transformers/modeling\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19wZWdhc3VzLnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.33%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+6.20%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=footer). Last update [18ca0e9...68ace1e](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Just to get some context, why can't we fix the tokenizer's default value?<|||||>correct max_model_length differs between checkpoints <|||||>But isn't each tokenizer instantiated with a checkpoint? I'd expect this to be automatically done by ``` tokenizer1 = AutoTokenizer.from_pretrained("checkpoint_with_one_max_len") tokenizer2 = AutoTokenizer.from_pretrained("checkpoint_with_another_max_len") ```<|||||>Yeah. The tokenizer defaults have always been correct. The issue was that fixing the model defaults to match them created an inconsistency with the state_dict on s3. For cnn... - Broken converter creates model, config with 512 positional embeddings. Tokenizer is good -- says max_model_length=1024. - User reports IndexError bug. #6599 - Sam adjusts model config to position_embeddings=1024. - This creates a new error: at __init__ the cnn model will allocate space for 1024 positional embeddings, but the state dict on s3 will only have 512. #6909 - Sam deletes positional embeddings on S3. Code works but gives missing key warning. - Sam sends this PR to suppress warning, add regression test that checks that #6599 can't happen again.
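As a rough illustration of the regression check described in the PR body — the tokenizer attribute is spelled `model_max_length` in the library, and the checkpoint name here is just an example:

```python
from transformers import AutoConfig, AutoTokenizer

def positions_cover_tokenizer(name="google/pegasus-xsum"):
    config = AutoConfig.from_pretrained(name)
    tokenizer = AutoTokenizer.from_pretrained(name)
    # The model needs at least as many position embeddings as the longest
    # input the tokenizer is allowed to produce.
    assert config.max_position_embeddings >= tokenizer.model_max_length
```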
transformers
6,605
closed
Add tests to Trainer
This PR moves the tests of the various `data_collator` classes into `test_data_collator.py` and adds tests of the Trainer on a simple regression problem. While testing, a few problems were uncovered: - The number of epochs is documented as a float but used as an int; the documentation is fixed. - One more step was performed than specified by the argument `max_steps`. - The evaluation loss was wrong whenever the evaluation dataset length was not a multiple of the batch size. Those three things are also fixed in this PR. With the regression infrastructure, we can add more tests (for custom data collators, optimizers, schedulers, etc.) since each training run is fast. Will do in follow-up PRs as this one was starting to be of a decent size already.
08-19-2020 20:00:38
08-19-2020 20:00:38
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=h1) Report > Merging [#6605](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe0b85e77a6af041471657069bbb9c21a880cd5c?el=desc) will **increase** coverage by `0.08%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6605/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6605 +/- ## ========================================== + Coverage 80.21% 80.30% +0.08% ========================================== Files 156 156 Lines 28178 28205 +27 ========================================== + Hits 22604 22650 +46 + Misses 5574 5555 -19 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <ø> (ø)` | | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.26% <ø> (+10.67%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <100.00%> (+0.02%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `90.90% <100.00%> (ø)` | | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <100.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `83.50% <100.00%> (+0.13%)` | :arrow_up: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <100.00%> (+0.02%)` | :arrow_up: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.02% <100.00%> (+0.07%)` | :arrow_up: | | ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=footer). Last update [9a86321...3f89b2d](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Re. the eval loss, did you also run [test_trainer_distributed.py](https://github.com/huggingface/transformers/blob/master/tests/test_trainer_distributed.py) on a multi-gpu machine?<|||||>No, I don't have a multi-GPU machine setup. It does not seem like this test uses the eval_loss anywhere, it only computes a metric.<|||||>> No, I don't have a multi-GPU machine setup. You can use the office machines! Not necessarily related to this PR but to keep in mind to run this test once in a while<|||||>> Not necessarily related to this PR but to keep in mind to run this test once in a while It was indeed broken not due to this PR, fixed. Know how to run it periodically now :-)
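For context, the sort of tiny regression setup these tests rely on so every Trainer run finishes in seconds could look like the following sketch (all names are assumptions, not the actual test code):

```python
import torch
from torch.utils.data import Dataset

class RegressionDataset(Dataset):
    """y = a * x + b with a little noise, small enough for near-instant training."""

    def __init__(self, a=2.0, b=3.0, length=64, seed=42):
        generator = torch.Generator().manual_seed(seed)
        self.x = torch.randn(length, generator=generator)
        self.y = a * self.x + b + 0.1 * torch.randn(length, generator=generator)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return {"input_x": self.x[i], "labels": self.y[i]}
```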
transformers
6,604
closed
Fix confusing warnings during TF2 import from PyTorch
1. Swapped `missing_keys` and `unexpected_keys`, which were previously reversed. 2. A copy-and-paste error caused these warnings to say "from TF 2.0" when it is actually "from PyTorch".
08-19-2020 19:52:46
08-19-2020 19:52:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=h1) Report > Merging [#6604](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/18ca0e91402d17950b870d7c9f67ddb7fd573817?el=desc) will **decrease** coverage by `1.15%`. > The diff coverage is `66.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6604/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6604 +/- ## ========================================== - Coverage 79.89% 78.74% -1.16% ========================================== Files 156 156 Lines 28213 28213 ========================================== - Hits 22542 22216 -326 - Misses 5671 5997 +326 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <66.66%> (ø)` | | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-6.02%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.29%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=footer). Last update [18ca0e9...6ebffb5](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This fixes #5588 <|||||>This pull request has been replaced with #6623
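For reference, the warnings touched by this PR show up whenever TF2 weights are initialized from a PyTorch checkpoint, for example (model name chosen arbitrarily):

```python
from transformers import TFBertModel

# Loading PyTorch weights into a TF2 model goes through modeling_tf_pytorch_utils.py,
# which is where the missing/unexpected key warnings are logged.
model = TFBertModel.from_pretrained("bert-base-cased", from_pt=True)
```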
transformers
6,603
closed
[cleanup] remove confusing newline
08-19-2020 19:08:53
08-19-2020 19:08:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=h1) Report > Merging [#6603](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/18ca0e91402d17950b870d7c9f67ddb7fd573817&el=desc) will **decrease** coverage by `1.12%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6603/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6603 +/- ## ========================================== - Coverage 79.89% 78.77% -1.13% ========================================== Files 156 156 Lines 28213 28213 ========================================== - Hits 22542 22226 -316 - Misses 5671 5987 +316 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-6.02%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-3.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.29%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=footer). Last update [18ca0e9...1ac2967](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Next time, try writing a PR title that provides more context. Like "remove confusing newline". Then your current title could go in the description. 
Anyway, thanks for the contribution!<|||||>I'll try to be clearer next time, thanks.