Dataset columns:

| column | dtype | values / lengths |
|---|---|---|
| repo | string (classes) | 1 value |
| number | int64 | 1 to 25.3k |
| state | string (classes) | 2 values |
| title | string (lengths) | 1 to 487 |
| body | string (lengths) | 0 to 234k, nullable (βŒ€) |
| created_at | string (lengths) | 19 to 19 |
| closed_at | string (lengths) | 19 to 19 |
| comments | string (lengths) | 0 to 293k |
transformers
5,500
closed
batch_encode_plus model output is different from tokenizer.encode model's output
I am trying to encode multiple sentences with BertTokenizer. I tried batch_encode_plus but I am getting different output when I feed BertTokenizer's output vs batch_encode_plus's output to the model. ``` single_sentence = 'checking single sentences' sentences = ['checking single sentences', 'many sentences encoding together', 'hello world how and', 'All model checkpoint weights were used when initializing BertModel.'] # loading models from transformers import BertModel, BertConfig, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained('bert-large-uncased') # single encoding and hidden layer weights def single_query(sentence): single_input_id = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0) # Batch size 1 outputs = model(single_input_id) features = outputs[0][:,0,:].detach().numpy() return features def batch_encoding(sentences): input_ids = torch.tensor(tokenizer.batch_encode_plus(sentences, pad_to_max_length=True)['input_ids']) outputs = model(input_ids) features = outputs[0][:,0,:].detach().numpy() return features ``` single_query(single_sentence) returns: ``` array([[-0.39814326, -0.4902882 , 0.02681825, ..., -0.28256905, -1.0546892 , 0.1055279 ]], dtype=float32) ``` while batch_enc = batch_encoding(sentences)[0] returns: ``` array([ 0.1909762 , 0.05580305, 0.221862 , ..., 0.16220105, -0.88524836, 0.12994497], dtype=float32) ``` Why is there a difference in the model's output? Is it because of padding?
07-03-2020 13:03:42
07-03-2020 13:03:42
Indeed, if you are using padding you need to provide the attention masks to your model otherwise it doesn't know which tokens it should not attend to. Here is the correct version of `batch_encoding` which will give the same output as the non batched version: ```python def batch_encoding(sentences): inputs = tokenizer(sentences, padding=True, return_tensors='pt') print(inputs) outputs = model(**inputs) features = outputs[0][:,0,:].detach().numpy() return features ``` I've also updated it to the new tokenizers API on which you can learn a lot more in the tutorial here: https://huggingface.co/transformers/preprocessing.html<|||||>@thomwolf I am using the pre-trained distilbert model. encoded_batch = tokenizer.batch_encode_plus(["hello", "there you"], add_special_tokens=True, return_tensors="tf", padding=True, truncation=True) `{'input_ids': <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[ 101, 7592, 102, 0], [ 101, 2045, 2017, 102]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[1, 1, 1, 0], [1, 1, 1, 1]], dtype=int32)>}` encoded_batch[0][0] -> pointing to hidden states of "hello" <tf.Tensor: shape=(4, 768), dtype=float32, numpy= array([[-0.20557329, -0.18245512, 0.0950693 , ..., -0.06398913, 0.16745588, 0.37530535], [-0.48316494, -0.13651992, 0.3210112 , ..., 0.0130196 , 0.27123356, 0.15390822], [ 0.8976611 , 0.14261621, -0.40148023, ..., 0.31185225, -0.68173647, -0.2594604 ], [-0.0716978 , -0.18830499, 0.35636497, ..., 0.09993267, -0.05575091, 0.14572877]], dtype=float32)> and if i encode "hello" alone, <tf.Tensor: shape=(3, 768), dtype=float32, numpy= array([[-0.20557335, -0.18245521, 0.09506968, ..., -0.06398894, 0.16745585, 0.37530527], [-0.48316532, -0.13651986, 0.32101193, ..., 0.0130194 , 0.27123365, 0.15390885], [ 0.89766103, 0.14261642, -0.40148014, ..., 0.31185254, -0.68173563, -0.25946063]], dtype=float32)> so here, if you see, there is one row extra due to padding in the above instance, and there is no way to figure out if that is SEP embeds or padded embeds and that can lead to discrepancies. Is there some way to send inputs to the model in a batch, keeping the embeddings intact? Thanks in advance
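A minimal sketch (not from the thread) addressing the follow-up question: the `attention_mask` returned by the tokenizer marks real tokens (including `[CLS]`/`[SEP]`) with 1 and padding with 0, so the hidden states of padded positions can simply be dropped.

```python
# Sketch: use the attention_mask to separate real-token embeddings from padding.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertModel.from_pretrained("bert-large-uncased")

sentences = ["checking single sentences", "many sentences encoding together"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs)[0]          # shape: (batch, seq_len, hidden)

mask = inputs["attention_mask"].bool()   # True for real tokens, False for padding
for i, sent in enumerate(sentences):
    real_token_states = hidden[i][mask[i]]   # keep only non-padded positions
    print(sent, real_token_states.shape)
```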
transformers
5,499
closed
[ERROR] add_special_tokens = True not working in version 3.0.0
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): BERT Language I am using the model on (English, Chinese ...): Multi-Lingual The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: In version 2.11 Everything is OK but I updated to 3.0.0 and I can not create fixed_length with padding for the encoding of the text <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
07-03-2020 13:02:55
07-03-2020 13:02:55
Hi, can you provide more information (basically fill in all the fields in the issue template), in particular a clear example, so we can try to reproduce the behavior?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
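A hedged sketch (not from the issue) of how fixed-length encodings with padding are typically produced with the v3.0.0 tokenizer API; `max_length=32` is an arbitrary example value.

```python
# Sketch: fixed-length encoding with the transformers 3.0 padding/truncation API.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

enc = tokenizer(
    "some example text",
    add_special_tokens=True,
    padding="max_length",   # pad up to max_length
    truncation=True,        # cut longer sequences down to max_length
    max_length=32,
)
print(len(enc["input_ids"]))  # always 32
```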
transformers
5,498
closed
What happened to https://huggingface.co/zero-shot/ ?
Hi, this page was very interesting. https://huggingface.co/zero-shot/ It is down since yesterday. What is up with it? Do you plan to bring it up again?
07-03-2020 12:42:16
07-03-2020 12:42:16
We're back up, and should now be set up to auto-restart when it fails. Thanks for the heads up!<|||||>Awesome! Thanks.<|||||>Hi! It seems the page is down again.<|||||>Man, streamlit's killing me. Thanks, rebooting now.<|||||>Ouch, this means we users are indirectly killing you! Thanks a lot. πŸ™‚ <|||||>This page is giving me a 502. Please reopen this issue.<|||||>Back up, and fixed the issue with auto-relaunch. Thanks for the heads up.<|||||>Hi! Thanks for the great demo! The page seems to be down again.<|||||>This time it was actually just me making some changes. Back up.<|||||>I am getting a timeout on it... so, still not available I think...
transformers
5,497
closed
[Generation] better error message
If `cur_len` of the input context is as long as or longer than `max_length`, a nice error message should be shown.
07-03-2020 12:40:22
07-03-2020 12:40:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=h1) Report > Merging [#5497](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49281ac9390e19f30c30a914b11aa55b561973d1&el=desc) will **decrease** coverage by `0.23%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5497/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5497 +/- ## ========================================== - Coverage 77.82% 77.59% -0.24% ========================================== Files 141 141 Lines 24617 24619 +2 ========================================== - Hits 19159 19103 -56 - Misses 5458 5516 +58 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.71% <100.00%> (-1.47%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <100.00%> (+<0.01%)` | :arrow_up: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.01% <0.00%> (-5.11%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.89% <0.00%> (-1.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=footer). Last update [49281ac...3e5929c](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Should be trivial to merge this IMO. Is there any case why one would now receive an error if `max_length` < `cur_len` @sshleifer @yjernite ?<|||||>Is max_length still meant to include the length of input_ids?<|||||>yes! Would you change it to be added to the input?<|||||>I don't understand your response exactly. My preference is that `max_length` is completely independent of the ids sent to the encoder. That is how the current code works. For example, if we are summarizing a news article of length `X`, and max_length is 20, we should be allowed to generate 20 tokens regardless of `X`. For the text generation use case, I care less, but I have the same opinion. `X.shape[1]` should not matter. I don't have an opinion on whether `decoder_start_token_id` counts. When does this assert get hit?<|||||>The assert hits at the moment only for text-generation of "encoder" only models (GPT2, XLNET, ...) if `input_ids.shape[1] >= max_length`. In the case of all "conditional" generation (using an encoder + decoder, like Bart / T5) `max_length` is independent of the input to the encoder because auto-regressive generation is only done for the decoder => which is expected. The question is whether for text generation of encoder only models like gpt2 `max_length` should be changed to something like `max_tokens_to_generate` in which case the limit would be `input_ids.shape[1] + max_tokens_to_generate` (independent of `input_ids`). Not sure whether it's more intuitive to define the number of max length tokens **to be generated** or better the max length of the complete text, also given the name `max_length`. <|||||>I think `max_length` should refer to the maximum number of tokens that can be generated. For example, if you wanted to make a next word suggester, it would be much simpler to have max_length=1, than it would be check how many input_ids != pad_token_id for each entry in your batch and then add 1 to that. I don't think there are as many use cases where you need N tokens regardless of the length of the input.
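A small sketch (assuming GPT-2 via transformers, not code from this PR) illustrating the semantics discussed above: for decoder-only generation, `max_length` includes the prompt tokens, which is exactly the case the new error message guards against.

```python
# Sketch: max_length counts prompt tokens for decoder-only generation.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode(
    "This prompt already has quite a few tokens in it", return_tensors="pt"
)
prompt_len = input_ids.shape[1]            # number of prompt tokens

# max_length covers prompt + generated text, so this adds only ~10 new tokens.
out = model.generate(input_ids, max_length=prompt_len + 10)
print(tokenizer.decode(out[0]))

# If max_length <= prompt_len, nothing can be generated -- the situation this PR
# now reports with a clear error message.
```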
transformers
5,496
closed
QA pipeline BART compatible
07-03-2020 11:46:03
07-03-2020 11:46:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=h1) Report > Merging [#5496](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **increase** coverage by `1.14%`. > The diff coverage is `85.71%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5496/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5496 +/- ## ========================================== + Coverage 76.39% 77.54% +1.14% ========================================== Files 141 141 Lines 24617 24622 +5 ========================================== + Hits 18807 19092 +285 + Misses 5810 5530 -280 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.34% <50.00%> (+0.13%)` | :arrow_up: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.12% <100.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.01% <0.00%> (-5.11%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.92% <0.00%> (-0.51%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <0.00%> (+73.10%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=footer). Last update [21cd8c4...b716a86](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,495
closed
Typo fix in `training` doc
`provides` -> `provided`
07-03-2020 11:26:05
07-03-2020 11:26:05
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=h1) Report > Merging [#5495](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8438bab38e1ea60efca181c92ebc7e4602f91848&el=desc) will **increase** coverage by `0.76%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5495/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5495 +/- ## ========================================== + Coverage 76.97% 77.74% +0.76% ========================================== Files 141 141 Lines 24617 24617 ========================================== + Hits 18950 19138 +188 + Misses 5667 5479 -188 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `71.83% <0.00%> (-23.95%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.41% <0.00%> (-2.27%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+1.32%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=footer). Last update [8438bab...74a8457](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for the fix!
transformers
5,494
closed
The inference speed of gpt2-xl differs significantly between PyTorch and TensorFlow.
**Environment:** - OS: Ubuntu 18.04 - Python: 3.7.6 - Transformers: 3.0.0 - PyTorch: 1.4.0 - Tensorflow: 2.2.0 - CUDA: 10.1 - CUDNN: 7.6 - GPU: V100 **My code:** ```import time import torch import tensorflow as tf from transformers import AutoTokenizer, AutoModelWithLMHead, TFAutoModelWithLMHead TIMES = 100 tokenizer = AutoTokenizer.from_pretrained("./gpt2-xl") # pytorch model = AutoModelWithLMHead.from_pretrained("./gpt2-xl") model = model.to("cuda") input = tokenizer.encode("This is the benchmark of gpt2-xl.", return_tensors="pt").to("cuda") total = 0 cnt = 0 start = torch.cuda.Event(enable_timing=True) end = torch.cuda.Event(enable_timing=True) for i in range(TIMES): start.record() o = model(input) end.record() torch.cuda.synchronize() if not i: continue total += start.elapsed_time(end)/1000 cnt += 1 print("Pytorch version --- cnt: {}, avg_time_cost: {}s".format(cnt, total/cnt)) # tensorflow gpus = tf.config.experimental.list_logical_devices('GPU') gpu = gpus[0].name with tf.device(gpu): model = TFAutoModelWithLMHead.from_pretrained("./gpt2-xl") input = tokenizer.encode("This is the benchmark of gpt2-xl.", return_tensors="tf") total = 0 cnt = 0 with tf.device(gpu): for i in range(TIMES): start = time.time() o = model(input) end = time.time() if not i: continue total += (end-start) cnt += 1 print("Tensorflow version --- cnt: {}, avg_time_cost: {}s".format(cnt, total/cnt)) ``` **Output:** ``` Pytorch version --- cnt: 99, avg_time_cost: 0.05521844493981564s Tensorflow version --- cnt: 99, avg_time_cost: 0.2912752628326416s ``` **The utilization of gpu:** - PyTorch 33% - Tensorflow 8% I'm new to Transformers. Did i use it in a wrong way, or any mistakes in my code? Thanks!
07-03-2020 09:24:01
07-03-2020 09:24:01
Turning off the eager mode of tensorflow makes inference much faster.
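A hedged sketch (not the poster's exact fix, and using the public "gpt2" checkpoint as a stand-in for the local `./gpt2-xl` path) of one common way to avoid eager-mode overhead: wrapping the forward pass in `tf.function` so it runs as a traced graph.

```python
# Sketch: run the TF forward pass as a compiled graph instead of eagerly.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelWithLMHead.from_pretrained("gpt2")

@tf.function
def forward(input_ids):
    return model(input_ids)

input_ids = tokenizer.encode("This is the benchmark of gpt2-xl.", return_tensors="tf")
_ = forward(input_ids)   # first call traces the graph (slow); later calls reuse it
_ = forward(input_ids)
```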
transformers
5,493
closed
Create README.md
07-03-2020 08:57:02
07-03-2020 08:57:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=h1) Report > Merging [#5493](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **increase** coverage by `1.82%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5493/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5493 +/- ## ========================================== + Coverage 76.39% 78.22% +1.82% ========================================== Files 141 141 Lines 24617 24617 ========================================== + Hits 18807 19257 +450 + Misses 5810 5360 -450 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+8.92%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+17.80%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=footer). Last update [21cd8c4...5e82926](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,492
closed
Update README.md
I set the language
07-03-2020 08:55:27
07-03-2020 08:55:27
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=h1) Report > Merging [#5492](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **increase** coverage by `1.82%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5492/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5492 +/- ## ========================================== + Coverage 76.39% 78.22% +1.82% ========================================== Files 141 141 Lines 24617 24617 ========================================== + Hits 18807 19257 +450 + Misses 5810 5360 -450 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+8.92%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+17.80%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=footer). Last update [21cd8c4...31db433](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,491
closed
Update README.md
07-03-2020 08:54:24
07-03-2020 08:54:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=h1) Report > Merging [#5491](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **increase** coverage by `1.82%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5491/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5491 +/- ## ========================================== + Coverage 76.39% 78.22% +1.82% ========================================== Files 141 141 Lines 24617 24617 ========================================== + Hits 18807 19257 +450 + Misses 5810 5360 -450 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+8.92%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+17.80%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=footer). Last update [21cd8c4...cf45716](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,490
closed
[ERROR] Tokenizer and TokenizerFast ???
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): BERT Language I am using the model on (English, Chinese ...): 'bert-base-multilingual-cased' The problem arises when using: * [ x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. `from transformers import *` 2. `tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-cased')` 3. `tokenizer.decode(tokenizer.encode('mở bΓ i lαΊ‘c trΓ΄i'))` --> wrong but: 1. `from transformers import *` 2. `tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')` 3. `tokenizer.decode(tokenizer.encode('mở bΓ i lαΊ‘c trΓ΄i'))` --> true <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior the decode sentence after encoding and decoding using TokenizerFast should be true <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Pytorch and TF - Python version: 3.6 - PyTorch version (GPU?): GPU - Tensorflow version (GPU?): 2.2 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
07-03-2020 07:35:59
07-03-2020 07:35:59
This is related to https://github.com/huggingface/transformers/issues/2917. In the slow tokenizers, when `do_lower_case=False` we don't strip accents, while we do it when `do_lower_case=True`. In the fast tokenizers, this is controlled by the `strip_accents` option, which is `True` here. @thomwolf How do you think we should fix this?<|||||>Yes let's do it @n1t0 and stick to the official bert tokenizer behavior in the fast tokenizers as well.
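A minimal sketch based on the explanation above: the fast tokenizer exposes a `strip_accents` option, so turning it off should keep the Vietnamese diacritics intact. Exact keyword support may depend on the transformers version, so treat this as an assumption.

```python
# Sketch: disable accent stripping in the fast BERT tokenizer.
from transformers import BertTokenizerFast

tok = BertTokenizerFast.from_pretrained(
    "bert-base-multilingual-cased", strip_accents=False
)
text = "mở bài lạc trôi"
print(tok.decode(tok.encode(text)))  # accents should now be preserved
```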
transformers
5,489
closed
encoder_outputs are always the same when generating with different inputs
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> HI, I've trained a bert2bert model to generate answers with different questions. But after training, the bert2bert model always produces the same encoder_outputs with different inputs. Does anyone know how to fix or avoid the problem? If I dont resize the bert's embedding size, will this solve the problem? Thanks in advance. ## The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ## The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## Environment info - `transformers` version: 2.11.0 - Platform: linux - Python version: 3.7 64bit - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): No - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Using parallel setting only Below is my training code. The inputs are turned to indices by tokenizer.encode_plus ``` import logging import os import sys import inspect import json import argparse from dataclasses import dataclass, fields from tqdm.auto import tqdm, trange import torch from torch.utils.data import DataLoader from transformers import ( EncoderDecoderModel, AdamW, get_linear_schedule_with_warmup, BertTokenizer, PreTrainedModel ) # import utils logger = logging.getLogger(__name__) @dataclass class training_args: weight_decay: float = 0.0 learning_rate: float = 5e-5 adam_epsilon: float = 1e-8 warmup_steps: int = 0 gradient_accumulation_steps: int = 1 # num_train_epochs: 10 max_grad_norm: float = 1.0 early_stop: float = 1e-5 stop_barrier: float = 1e-5 def set_args(): parser = argparse.ArgumentParser() parser.add_argument("--vocab_file", default='vocab_trad_clean.txt') # parser.add_argument("--encoder_config", default='Configs/encoder.json') # parser.add_argument("--decoder_config", default='Configs/decoder.json') parser.add_argument("--data_folder", required=True) # parser.add_argument("--output_folder", required=True) # parser.add_argument("--from_pretrained", action='store_true') parser.add_argument("--logging_steps", default=1000, type=int) parser.add_argument("--save_total_limit", default=5, type=int) parser.add_argument("--save_steps", default=10000, type=int) parser.add_argument("--batch_size", default=20, type=int) parser.add_argument("--num_train_epochs", default=30, type=int) args = parser.parse_args() return args class Generator_Data(Dataset): def __init__(self, data): super(Generator_Data, self).__init__() self.inputs = [] self.outputs = [] for example in data: self.inputs.append(example['source']) self.outputs.append(example['target']) def __len__(self): return len(self.inputs) def __getitem__(self, index): return self.inputs[index], self.outputs[index] def collate_fn(batch): input_dict = { "input_ids": [], "decoder_input_ids": [], "labels": [], } 
for data in batch: input_data = data[0] output_data = data[1] input_dict["input_ids"].append(input_data["input_ids"]) input_dict["decoder_input_ids"].append(output_data["input_ids"]) input_dict["labels"].append(output_data["input_ids"]) input_dict = {k: torch.LongTensor(v) for k, v in input_dict.items()} return input_dict def Get_DataLoader(data_file, batch_size, training=False): if not os.path.isfile(data_file): raise Exception(f"data file [{data_file}] doesn\'t exist in util, LoadDataset") logger.info(f"start loading data from {data_file}") data = torch.load(data_file) dataset = Generator_Data(data) logger.info("turn dataset into dataloader") if training: loader = DataLoader(dataset, batch_size, shuffle=True, collate_fn=collate_fn) else: loader = DataLoader(dataset, batch_size, shuffle=False, collate_fn=collate_fn) return loader if __name__ == "__main__": args = set_args() # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) tokenizer = BertTokenizer.from_pretrained('bert-base-chinese', vocab_file=args.vocab_file) tokenizer.add_tokens('[NewLine]') tokenizer.add_tokens('[space]') args.output_folder = 'Seq2Seq_Transformers/Model/test' os.makedirs(args.output_folder, exist_ok=True) tokenizer.save_pretrained(args.output_folder) model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-chinese", "bert-base-chinese") model.encoder.resize_token_embeddings(len(tokenizer)) model.decoder.resize_token_embeddings(len(tokenizer)) model.config.encoder.vocab_size = len(tokenizer) model.config.decoder.vocab_size = len(tokenizer) if torch.cuda.is_available(): args.device = torch.device("cuda") args.n_gpu = torch.cuda.device_count() else: args.device = torch.device("cpu") args.n_gpu = 0 model.to(args.device) if args.n_gpu > 1: model = torch.nn.DataParallel(model) # loading the data train_pt_file = os.path.join(args.data_folder, 'train.pt') valid_pt_file = os.path.join(args.data_folder, 'valid.pt') train_dataloader = Get_DataLoader(train_pt_file, batch_size=args.batch_size, training=True) valid_dataloader = Get_DataLoader(valid_pt_file, batch_size=args.batch_size) # Prepare optimizer and schedule (linear warmup and decay) t_total = int(len(train_dataloader) // training_args.gradient_accumulation_steps * args.num_train_epochs) no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": training_args.weight_decay }, { "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] optimizer = AdamW(optimizer_grouped_parameters, lr=training_args.learning_rate, eps=training_args.adam_epsilon) scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=training_args.warmup_steps, num_training_steps=t_total ) # start training logger.info("***************************") for field in fields(training_args): logger.info(f"{field.name}: {getattr(training_args, field.name)}") logger.info("***************************") global_step = 0 tr_loss = 0.0 logging_loss = 0.0 loss_scalar = 1000000 previous_loss_scaler = -1 model.train() model.zero_grad() for epoch in tqdm(range(args.num_train_epochs), desc="Epoch", ascii=True): epoch_iterator = tqdm(train_dataloader, desc="Iteration", ascii=True) for step, inputs in enumerate(epoch_iterator): model.train() for k, v in inputs.items(): inputs[k] = v.to(args.device) outputs = model(**inputs) 
# loss, outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=inputs["input_ids"], lm_labels=inputs["input_ids"])[:2] loss = outputs[0] # model outputs are always tuple in transformers (see doc) if args.n_gpu > 1: loss = loss.mean() # mean() to average on multi-gpu parallel training if training_args.gradient_accumulation_steps > 1: loss = loss / training_args.gradient_accumulation_steps loss.backward() tr_loss += loss.item() if (step + 1) % training_args.gradient_accumulation_steps == 0 or ( # last step in epoch but step is always smaller than gradient_accumulation_steps len(epoch_iterator) <= training_args.gradient_accumulation_steps and (step + 1) == len(epoch_iterator) ): torch.nn.utils.clip_grad_norm_(model.parameters(), training_args.max_grad_norm) optimizer.step() scheduler.step() model.zero_grad() global_step += 1 if args.logging_steps > 0 and global_step % args.logging_steps == 0: logs = {} loss_scalar = (tr_loss - logging_loss) / args.logging_steps learning_rate_scalar = scheduler.get_last_lr()[0] logs["learning_rate"] = learning_rate_scalar logs["loss"] = loss_scalar logs["loss_difference"] = abs(loss_scalar-previous_loss_scaler) previous_loss_scaler = loss_scalar logging_loss = tr_loss epoch_iterator.write(json.dumps({**logs, **{"step": global_step}})) if loss_scalar < training_args.early_stop:# or logs["loss_difference"] < training_args.stop_barrier: break if args.save_steps > 0 and global_step % args.save_steps == 0: # Save model checkpoint output_dir = os.path.join(args.output_folder, f"checkpoint-{global_step}") os.makedirs(output_dir, exist_ok=True) logger.info("Saving model checkpoint to %s", output_dir) # Save a trained model and configuration using `save_pretrained()`. # They can then be reloaded using `from_pretrained()` if isinstance(model, torch.nn.DataParallel): model = model.module if not isinstance(model, PreTrainedModel): raise ValueError("Trainer.model appears to not be a PreTrainedModel") model.save_pretrained(output_dir) torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt")) torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt")) logger.info("Saving optimizer and scheduler states to %s", output_dir) if loss_scalar < training_args.early_stop: break output_dir = args.output_folder os.makedirs(output_dir, exist_ok=True) logger.info("Saving model checkpoint to %s", output_dir) # Save a trained model and configuration using `save_pretrained()`. # They can then be reloaded using `from_pretrained()` if isinstance(model, torch.nn.DataParallel): model = model.module if not isinstance(model, PreTrainedModel): raise ValueError("Trainer.model appears to not be a PreTrainedModel") model.save_pretrained(output_dir) ``` Besides, for each time step, encoder_outputs are the same, like the picture below. I think it's very strange. I am not sure if they are the same problems. ![image](https://user-images.githubusercontent.com/15016623/86547733-8c556000-bf6c-11ea-8a41-76524a98f0b3.png)
07-03-2020 06:44:13
07-03-2020 06:44:13
Hmm, this will be hard to debug here. I'm currently working on getting a working example of a Bert2Bert model, so I will keep an eye on `encoder_output` bugs! See conversation here: https://github.com/huggingface/transformers/issues/4443#issuecomment-656691026<|||||>Thank you for your reply. I am looking forward your Bert2Bert example. And I hope we can solve this problem.<|||||>Hey @bobshih, Training a Bert2Bert model worked out fine for me - I did not experience any bugs related to `encoder_outputs`. You can check out the model and all the code to reproduce the results here: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16 Maybe you can take a look, adapt your code and see whether the error persists :-) <|||||>OK, thank for your attention. I will adapt my code after finishing my work at hand. <|||||>Hi, @patrickvonplaten, I have trained EncoderDecoderModel with your training example script. I noticed that if there are too many padding tokens in training data, it will make the trained model produce the same vectors despite the different inputs. but I wonder why attention mask does not work? In my original training setting, there are 93% padding tokens. After I reduce the max length and make padding tokens decrease to 21%, the encoderdecoder model works without problems.<|||||>This line: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script: ```python batch["labels"] = [ [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"] ] ``` in the preprocessing should make sure that the PAD token does not influence the loss and thus also not the model.<|||||>> This line: > > https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script: > > ```python > batch["labels"] = [ > [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"] > ] > ``` > > in the preprocessing should make sure that the PAD token does not influence the loss and thus also not the model. Yes, I understand what you mention, and I also use this setting for models after adapting my script, but the problem shows again. I will train the model again with this setting in the weekend. And I hope there will be a different result. Again, thank you very much for solving the problem and patience.
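A small illustration (not from the thread) of why replacing pad token ids with -100 in the labels works: PyTorch's `CrossEntropyLoss` ignores targets equal to `ignore_index`, which defaults to -100, so padded positions do not contribute to the loss or the gradients.

```python
# Sketch: positions labelled -100 are excluded from the cross-entropy loss.
import torch
import torch.nn as nn

logits = torch.randn(5, 10)                    # 5 positions, vocabulary of 10
labels = torch.tensor([3, 7, -100, -100, 1])   # two padded positions masked with -100

loss_fn = nn.CrossEntropyLoss()                # ignore_index=-100 by default
loss = loss_fn(logits, labels)                 # only positions 0, 1 and 4 contribute
print(loss)
```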
transformers
5,488
closed
Cannot train RoBERTa from scratch with multiple nodes and multiple GPUs
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: - [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) - [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Use the `transformers/examples/language-modeling/run_language_modeling.py` to train RoBERT model from scratch. 2. I am using the Slurm tool to submit a job that requires multiple nodes and multiple GPUs. In this case, it requests 2 nodes and 2 GPUs for each. The shell scripts: ``` #!/bin/bash #SBATCH --time=3:00:00 #SBATCH --ntasks=2 #SBATCH --nodes=2 #SBATCH --cpus-per-task=2 #SBATCH --gres=gpu:2 #SBATCH --mem=64G #SBATCH --job-name=pre-train #SBATCH --output=pre-train.out #SBATCH --account=XXXXXX module load gcc module load cuda cudnn module load openmpi nccl source ~/roberta/bin/activate export NCCL_DEBUG=INFO export NPROC_PER_NODE=2 export HDF5_USE_FILE_LOCKING='FALSE' export PARENT=`/bin/hostname -s` export MPORT=13000 export CHILDREN=`scontrol show hostnames $SLURM_JOB_NODELIST | grep -v $PARENT` export HOSTLIST="$PARENT $CHILDREN" echo $HOSTLIST export WORLD_SIZE=$SLURM_NTASKS srun distributed_runner.sh ``` 3. `distributed_runner.sh` script is: ``` #!/bin/bash /bin/hostname -s source ~/roberta/bin/activate python3 -m torch.distributed.launch \ --nproc_per_node=$NPROC_PER_NODE \ --nnodes=$SLURM_JOB_NUM_NODES \ --node_rank=$SLURM_PROCID \ --master_addr="$PARENT" --master_port="$MPORT" \ run_language_modeling.py \ --gradient_accumulation_steps=16 \ --train_data_file="./data/sample.txt" \ --output_dir="./sample_model/" \ --model_type=roberta \ --mlm \ --local_rank=$SLURM_LOCALID \ --config_name="./sample_config" \ --tokenizer_name="./sample_config" \ --do_train \ --line_by_line \ --learning_rate=1e-4 \ --num_train_epochs=40 \ --save_total_limit=5 \ --save_steps=20 \ --per_gpu_train_batch_size=16 \ --seed=42 ``` 4. `sample.txt` is a dummy training set with 20K lines where each line is a text string. Data is already pre-processed following the official instruction. `sample_config/` includes all the pre-processed vocabulary and config files: config.json, merges.txt, tokenizer_config.json, and vocab.json. 5. The job can be successfully launched on two nodes, with no error. But **each node only uses one GPU** based on the GPU usage with `nvidia-smi` command. **Node 1** ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:3B:00.0 Off | 0 | | N/A 40C P0 70W / 300W | 21175MiB / 32510MiB | 100% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla V100-SXM2... 
On | 00000000:86:00.0 Off | 0 | | N/A 35C P0 42W / 300W | 11MiB / 32510MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 190822 C /home/py3.6/bin/python3 13003MiB | | 0 190823 C /home/py3.6/bin/python3 8161MiB | +-----------------------------------------------------------------------------+ ``` **Node 2** ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:18:00.0 Off | 0 | | N/A 43C P0 77W / 300W | 17127MiB / 32510MiB | 97% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla V100-SXM2... On | 00000000:3B:00.0 Off | 0 | | N/A 33C P0 40W / 300W | 11MiB / 32510MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 209232 C /home/py3.6/bin/python3 6899MiB | | 0 209233 C /home/py3.6/bin/python3 10217MiB | +-----------------------------------------------------------------------------+ ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Expect each node can use two GPUs. Namely, the Processes should show the usage of GPU 0 and 1 for both nodes. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.1 - Platform: CentOS Linux 7 (Core) - Python version: Python 3.6.3 - PyTorch version (GPU?): torch==1.5.1 - Tensorflow version (GPU?): tensorflow-gpu==2.1.0 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes
07-03-2020 06:13:40
07-03-2020 06:13:40
I figured out the issue: I should not pass `--local_rank=$SLURM_LOCALID` as an argument. `torch.distributed.launch` will automatically pass the right `--local_rank` value to run_language_modeling.py.
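A hedged sketch of how a training script typically consumes the `--local_rank` flag that `torch.distributed.launch` injects for each spawned process; passing your own value (e.g. `$SLURM_LOCALID`) on the command line overrides the launcher's per-process value, which is why both processes ended up on the same GPU.

```python
# Sketch: let torch.distributed.launch supply --local_rank, one process per GPU.
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)
args, _ = parser.parse_known_args()

if args.local_rank != -1:
    torch.cuda.set_device(args.local_rank)                # bind this process to its GPU
    torch.distributed.init_process_group(backend="nccl")  # launcher sets MASTER_ADDR etc.
```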
transformers
5,487
closed
Better TPU Support in examples
# πŸš€ Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> I tried to train BERT on TPU recently, and found that your [examples](https://github.com/huggingface/transformers/blob/master/examples) have done some work on this topic. However, some code looks experimental and not perfectly ready to be used. The combination of [xla and pytorch-lightning](https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py) looks great, but it seems not to be used in any training scripts or in the documentation. I'd like to know when that code will be ready, and I'd be very glad to contribute some code.
07-03-2020 05:53:19
07-03-2020 05:53:19
I agree with this request. TPU training pipeline is very fragile, lacking and needs more attention. Encouraging more use of TPU by providing easy examples would increase its usage resulting in a higher quality TPU Training system over time as more people contribute.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,486
closed
Tokenizers throwing warning "The current process just got forked, Disabling parallelism to avoid deadlocks.. To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)"
I know this warning is because the transformers library was updated to 3.x. I know the warning says to set TOKENIZERS_PARALLELISM = true / false My question is: where should I set TOKENIZERS_PARALLELISM = true / false? Is this when defining tokenizers, like ``` tok = Tokenizer.from_pretrained('xyz', TOKENIZERS_PARALLELISM=True) // this doesn't work ``` or when encoding text, like ``` tok.encode_plus(text_string, some=some, some=some, TOKENIZERS_PARALLELISM = True) // this also didn't work ``` Suggestions anyone?
07-03-2020 05:13:36
07-03-2020 05:13:36
This might help you: https://stackoverflow.com/questions/62691279/how-to-disable-tokenizers-parallelism-true-false-warning<|||||>I suspect this may be caused by loading data. In my case, it happens when my dataloader starts working.<|||||>This is happening whenever you use `multiprocessing` (Often used by data loaders). The way to disable this warning is to set the `TOKENIZERS_PARALLELISM` environment variable to the value that makes more sense for you. By default, we disable the parallelism to avoid any hidden deadlock that would be hard to debug, but you might be totally fine while keeping it enabled in your specific use-case. You can try to set it to `true`, and if your process seems to be stuck, doing nothing, then you should use `false`. We'll improve this message to help avoid any confusion (Cf https://github.com/huggingface/tokenizers/issues/328)<|||||>I may be a rookie, but it seems like it would be useful to indicate that this is an environment variable in the warning message.<|||||>You are totally right! In the latest version `3.0.2`, the warning message should be a lot better, and it will trigger only when necessary.<|||||>Hi, sorry to bump this thread... I'm having the same problem however, the tokenizer is used only in my model. Data loading is made with multiple workers but it is only loading raw text which is then given to the model and only the model uses the tokenizer. I don't have multi model or whatever, just a classic pytorch model. Thus I was wondering how can I have the warning. Thanks in advance, Have a great day :) <|||||>You must be using a tokenizer before using `multiprocessing`. When your process gets forked, you see this message because it detects that a fork is happening and that some kind of parallelism was used before.<|||||>@n1t0, Thanks a lot for the fast reply, I guess it detect a fork even if it's safe for me to do so... Yes my process is forked but not the tokenizer. Then I will use the env variable to remove the warning. <|||||>I use ```tokenizer``` in my data loader. If that is the source of this problem (hence disabling the parallelization --> hence slow training), then what is the solution? Using ```tokenizer``` in the pre-processing step? <|||||>After testing, it is found that when the data in a dataloader is processed by the token, and the datalodaer jumps out before it is finished, this warning will be triggered; I give a code example: ``` # for example, following code will trigger the warning for texts in train_dataloader: _ = tokenizer.batch_encode_plus(texts) # loader has not been traversed # but texts are used break for texts in test_dataloader: # warning ... pass or break # and following code will not trigger the warning for texts in train_dataloader: # loader has not been traversed # but texts are not used break for texts in test_dataloader: # No warning pass or break ```<|||||>@hbchen121 my dataloader processes the text in init function During data loading time, directly input_ids and attention masks are fetched, yet I get this warning.<|||||>Despite [the documentation](https://huggingface.co/transformers/v3.0.2/model_doc/auto.html) saying that `use_fast` defaults to `False`, adding `use_fast=False` so that it's `AutoTokenizer.from_pretrained(model_name, use_fast=False)` removed this warning for me. If I just use `AutoTokenizer.from_pretrained(model_name)`, the warning pops up again.
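A minimal sketch of the workaround discussed in this thread: set the environment variable before the tokenizer is used and before any worker processes are forked. The exact placement is up to the user; only the variable name comes from the warning itself, the model name below is just an example.

```python
# Sketch: set the environment variable before the tokenizer is created and before
# any DataLoader workers are forked; "false" silences the warning by disabling
# tokenizer-internal parallelism, "true" keeps it enabled.
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"   # or "true" if you know forking is safe

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# ... build your Dataset/DataLoader (num_workers > 0) as usual afterwards.
```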
transformers
5,485
closed
Bert-extractive-summarizer importing issue
Hi, I am facing issues while importing summarizer.
NameError Traceback (most recent call last)
<ipython-input-6-3b4384c20fe2> in <module>()
----> 1 from summarizer import Summarizer
2
3 body = 'Text body that you want to summarize with BERT'
4 body2 = 'Something else you want to summarize with BERT'
5 model = Summarizer()
~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/summarizer/__init__.py in <module>()
----> 1 from summarizer.model_processors import Summarizer, SingleModel, TransformerSummarizer
~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/summarizer/model_processors.py in <module>()
----> 1 from summarizer.bert_parent import BertParent
2 from summarizer.cluster_features import ClusterFeatures
3 from summarizer.sentence_handler import SentenceHandler
4 from typing import List
5 from abc import abstractmethod
~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/summarizer/bert_parent.py in <module>()
9
10
---> 11 class BertParent(object):
12
13 """
~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/summarizer/bert_parent.py in BertParent()
16
17 MODELS = {
---> 18 'bert-base-uncased': (BertModel, BertTokenizer),
19 'bert-large-uncased': (BertModel, BertTokenizer),
20 'xlnet-base-cased': (XLNetModel, XLNetTokenizer),
**NameError: name 'BertModel' is not defined**
Please help me with this issue.
07-03-2020 04:25:34
07-03-2020 04:25:34
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,484
closed
Error using t5-base-cnn
I'm trying to fine-tune all the t5 models over CNN/DailyMail to see how they perform compared to the BART ones. I came across your t5-base-cnn model the other day. I tried using it in the way mentioned, but got interrupted by an error that says: OSError: Model name 'sshleifer/t5-base-cnn' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'sshleifer/t5-base-cnn' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url. Earlier, when I fine-tuned t5-small over CNN, the output directory contained this spiece.model file, but it's not present in your listed files. Any suggestions on how to get around this so that I can use your t5-base-cnn instead of fine-tuning all over again myself? Thanks. @sshleifer
07-03-2020 04:19:11
07-03-2020 04:19:11
Use the standard `T5Tokenizer.from_pretrained('t5-base')`<|||||>and I would love to hear your results!<|||||>The outputs of t5-base-cnn are good! Will let you know when I run over a bigger dataset. My doubt is how many epochs is ideal for fine-tuning t5 models over cnn/dm? My t5-small fine-tuned over cnn/dm code(number of epochs ran : 1) produces okayish results but not that great. Even some sentences in the output didn't get completed at the end and looked as if it's cut.<|||||>I don't know how many epochs to train t5-small for, but our new `finetune.py` tracks the validation rouge 2 score. Usually when that stops increasing, the model will not further improve. Also, since epochs are so long, there is a `--val_check_interval` argument that you can use to check this statistic more frequently than the default, every epoch.<|||||>Thank you, Sam. You solve my problem too. Benefit a lot from your work.<|||||>Happy to help. Closing this for now, feel free to open a new issue if you run into more problems!<|||||>Hello everyone! I used direct T5Tokenizer.from_pretrained('t5-base')... however, I got the following error: OSError: Model name 't5-base' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 't5-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url. How can I resolve this issue? or from where can I download the model manually? Thank you in advance <|||||>I tried to generate results using ```sshleifer/t5-base-cnn``` and changed the tokenizer to ```tokenizer = T5Tokenizer.from_pretrained('t5-base')``` and failed. My code: ``` python run_eval.py sshleifer/t5-base-cnn $DATA_DIR/test.source $OUTPUT_FILE \ --reference_path $DATA_DIR/test.target \ --task summarization \ --device cuda \ --fp16 \ --bs 32 \ ``` Following is the error message: ``` Traceback (most recent call last): File "run_eval.py", line 127, in <module> run_generate() File "run_eval.py", line 112, in run_generate checkpoint_path=args.checkpoint_path, File "run_eval.py", line 63, in generate_summaries_or_translations **gen_kwargs, File "/home/rachelzheng/www-joy/venv/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad return func(*args, **kwargs) File "/home/rachelzheng/www-joy/venv/lib/python3.6/site-packages/transformers/generation_utils.py", line 480, in generate model_kwargs=model_kwargs, File "/home/rachelzheng/www-joy/venv/lib/python3.6/site-packages/transformers/generation_utils.py", line 795, in _generate_beam_search input_ids = torch.cat([input_ids, beam_tokens.unsqueeze(1)], dim=-1) RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorMath.cu:196 /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[33,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[34,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. 
/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[35,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
[... the same "index out of bounds" assertion is repeated for the remaining threads of block [0,0,0] ...]
```
<|||||>Okay it looks like ```fp16``` leads to the problem. Remove ```--fp16``` solves my problem.
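To make the tokenizer suggestion from this thread concrete, here is a minimal sketch of loading `sshleifer/t5-base-cnn` with the standard `t5-base` tokenizer. The generation settings (beam size, lengths) are illustrative and not the values used in the author's evaluation.

```python
# Sketch: use the stock t5-base tokenizer with the fine-tuned t5-base-cnn weights,
# since the model repo does not ship its own spiece.model.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("sshleifer/t5-base-cnn")

article = "summarize: " + "Long CNN/DailyMail article text goes here ..."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=4,        # illustrative beam size
    max_length=142,     # illustrative summary length
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```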
transformers
5,483
closed
can't get models directory after running python run_squad.py
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: after running "python run_squad.py ", I didn't get models directory. running time of my code in colab is only 20 minutes, I think the training process is not done, what's the problem? How to solve that? The tasks I am working on is: SQUaD ## To reproduce Steps to reproduce the behavior: 1.https://qa.fastforwardlabs.com/pytorch/hugging%20face/wikipedia/bert/transformers/2020/05/19/Getting_Started_with_QA.html 2.click the "Open in Colab" button above !python run_squad.py \ --model_type bert \ --model_name_or_path bert-base-uncased \ --output_dir models/bert/ \ --data_dir data/squad \ --overwrite_output_dir \ --overwrite_cache \ --do_train \ --train_file train-v2.0.json \ --version_2_with_negative \ --do_lower_case \ --do_eval \ --predict_file dev-v2.0.json \ --per_gpu_train_batch_size 2 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --threads 10 \ --save_steps 5000 ## Expected behavior ## Environment info colab GPU - `transformers` version: - Platform:colab - Python version:3.6.9 - PyTorch version (GPU?):1.5.1+cu101 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
07-03-2020 01:25:18
07-03-2020 01:25:18
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,482
closed
Can't pickle tokenizers ...
Tokenizers are losing all their special tokens when un-pickled. Not sure if there are other attributes that aren't being rehydrated as well ... ``` hf_tokenizer # <transformers.tokenization_roberta.RobertaTokenizer at 0x7fc4ce625f10> hf_tokenizer.special_tokens_map # {'bos_token': '<s>', # 'eos_token': '</s>', # 'unk_token': '<unk>', # 'sep_token': '</s>', # 'pad_token': '<pad>', # 'cls_token': '<s>', # 'mask_token': '<mask>'} pickle.dump( hf_tokenizer, open( "save.p", "wb" ) ) hf_tokenizer = pickle.load( open( "save.p", "rb" ) ) hf_tokenizer.special_tokens_map # {'bos_token': '', # 'eos_token': '', # 'unk_token': '', # 'sep_token': '', # 'pad_token': '', # 'cls_token': '', # 'mask_token': ''} ```
07-03-2020 01:03:17
07-03-2020 01:03:17
Hi, ok, I can reproduce this. It is a bug in the `AddedToken` class of `huggingface/tokenizers`. Moving this up.
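Until that bug is fixed upstream, a common workaround is to serialize the tokenizer with its own save/load methods instead of pickling the Python object. This is a sketch of that idea, not a statement about when the `AddedToken` fix lands.

```python
# Sketch of a pickle-free round trip: save_pretrained/from_pretrained keep the
# special tokens map intact, unlike pickling the tokenizer object directly.
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokenizer.save_pretrained("saved_tokenizer")   # writes vocab, merges and special tokens files

reloaded = RobertaTokenizer.from_pretrained("saved_tokenizer")
print(reloaded.special_tokens_map)             # expected to show <s>, </s>, <mask>, ...
```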
transformers
5,481
closed
Merge pull request #1 from huggingface/master
Version track
07-03-2020 00:20:18
07-03-2020 00:20:18
transformers
5,480
closed
'Size' Error while loading t5-large model
# πŸ› Bug ## Information Model I am using t5-large: Language I am using the model on English The problem arises when using: ``` from transformers import T5Tokenizer, T5Model tokenizer = T5Tokenizer.from_pretrained('t5-large') model = T5Model.from_pretrained('t5-large') input_ids = tokenizer('sentence embeddings from t5 model', return_tensors="pt") outputs = model(input_ids=input_ids, decoder_input_ids=input_ids) ``` Error message : ``` KeyError: 'size' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) <ipython-input-2-0ba21adf6547> in <module> 1 input_ids = tokenizer('chyuu wow this is working', return_tensors="pt") ----> 2 outputs = model(input_ids=input_ids, decoder_input_ids=input_ids) ~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/tfproject/tfenv/lib/python3.7/site-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, decoder_past_key_value_states, use_cache, inputs_embeds, decoder_inputs_embeds, head_mask, output_attentions, output_hidden_states) 949 head_mask=head_mask, 950 output_attentions=output_attentions, --> 951 output_hidden_states=output_hidden_states, 952 ) 953 ~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/tfproject/tfenv/lib/python3.7/site-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_value_states, use_cache, output_attentions, output_hidden_states) 676 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") 677 elif input_ids is not None: --> 678 input_shape = input_ids.size() 679 input_ids = input_ids.view(-1, input_shape[-1]) 680 elif inputs_embeds is not None: ~/tfproject/tfenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in __getattr__(self, item) 185 return self.data[item] 186 except KeyError: --> 187 raise AttributeError 188 189 def __getstate__(self): AttributeError: ``` The tasks I am working on is: Getting sentence representation ## To reproduce ## Environment info - `transformers` version: - Platform: Ubuntu 18.04 - Python version: python3.7 - PyTorch version (GPU?): 1.5.1 - Tensorflow version (GPU?): 1.14 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
07-02-2020 23:23:33
07-02-2020 23:23:33
Hi! This is because the return of `tokenizer('sentence embeddings from t5 model', return_tensors="pt")` is a dict containing values that can be used by the model. It's not a tensor, so it's not `input_ids`. Change the following line: ```py tokenizer('sentence embeddings from t5 model', return_tensors="pt") ``` to ```py tokenizer('sentence embeddings from t5 model', return_tensors="pt")["input_ids"] ``` to make it work. You can check the documentation [here](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__).
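Putting that change back into the original snippet, a corrected sketch might look like the following; only the indexing of the tokenizer output differs from the code in the issue.

```python
# Sketch: pass tensors, not the whole BatchEncoding dict, to the model.
from transformers import T5Tokenizer, T5Model

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5Model.from_pretrained("t5-large")

# The tokenizer returns a dict; pull out the actual tensor of token ids.
input_ids = tokenizer("sentence embeddings from t5 model", return_tensors="pt")["input_ids"]
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)

last_hidden_state = outputs[0]  # (batch, seq_len, d_model)
```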
transformers
5,479
closed
Exposing prepare_for_model for both slow & fast tokenizers
With version v3.0.0, two breaking changes that could have been avoided have been introduced. After discussion with @n1to and @thomwolf, this PR aims to revert these changes, by implementing two changes: - The `prepare_for_model` method for both slow and tokenizers is now publicly exposed (it was only the case for the slow tokenizers before v3.0.0) - The truncation methods now default to `longest_first` instead of `first_only` . This PR adds two tests, for both Python and Rust tokenizers: - Assert that `tokenizer.prepare_for_model(tokenizer.encode(x)) == tokenizer.encode_plus(x)` - Assert that the output of `prepare_for_model` for rust and python tokenizers is equal. closes https://github.com/huggingface/transformers/issues/5377 closes https://github.com/huggingface/transformers/issues/5447 closes https://github.com/huggingface/transformers/issues/5460
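As a rough illustration of the round trip these tests check (not the exact test code from the PR), the idea is that `prepare_for_model` applied to pre-tokenized ids should reproduce what `encode_plus` builds end to end. The example below assumes special tokens are added only in the `prepare_for_model` step, hence `add_special_tokens=False` on the first pass.

```python
# Illustrative sketch of the prepare_for_model / encode_plus equivalence.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text = "Hello, world!"

ids = tokenizer.encode(text, add_special_tokens=False)
prepared = tokenizer.prepare_for_model(ids)    # adds [CLS]/[SEP] and builds the masks
reference = tokenizer.encode_plus(text)

assert prepared["input_ids"] == reference["input_ids"]
```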
07-02-2020 21:57:45
07-02-2020 21:57:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=h1) Report > Merging [#5479](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d&el=desc) will **increase** coverage by `0.43%`. > The diff coverage is `89.53%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5479/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5479 +/- ## ========================================== + Coverage 77.36% 77.80% +0.43% ========================================== Files 141 141 Lines 24617 24632 +15 ========================================== + Hits 19045 19164 +119 + Misses 5572 5468 -104 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <89.28%> (-0.62%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.16% <100.00%> (+0.94%)` | :arrow_up: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+1.25%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.23% <0.00%> (+2.28%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+4.10%)` | :arrow_up: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `55.79% <0.00%> (+27.58%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=footer). Last update [ef0e9d8...ecdc965](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,478
closed
Possible breaking undetected change to "data/processors/squad.py"
# ❓ Questions & Help It looks like the squad data utils hasn't been updated to the new version. https://github.com/huggingface/transformers/blob/fcf0652460753f8a81f7576e8abdaa6b3742f00e/src/transformers/data/processors/squad.py#L136 ``` encoded_dict = tokenizer.encode_plus( # TODO(thom) update this logic truncated_query if tokenizer.padding_side == "right" else span_doc_tokens, span_doc_tokens if tokenizer.padding_side == "right" else truncated_query, truncation="only_second" if tokenizer.padding_side == "right" else "only_first", padding="max_length", max_length=max_seq_length, return_overflowing_tokens=True, stride=max_seq_length - doc_stride - len(truncated_query) - sequence_pair_added_tokens, return_token_type_ids=True, ) ``` In squad utils, `encoded_dict` used to have a 'overflowing_tokens' key, but in the new version, all the overflowing tokens are returned as list of lists. But it looks like the data processor doesn't take this into account ``` if "overflowing_tokens" not in encoded_dict or ( "overflowing_tokens" in encoded_dict and len(encoded_dict["overflowing_tokens"]) == 0 ): break span_doc_tokens = encoded_dict["overflowing_tokens"] ``` So this logic would interpret that there are no overflowing tokens since that key doesn't exist. The code I am working on is using version 2.11.0, doesn't have 'overflowing_tokens' either, but it doesn't have the list of lists like in version 3.0.0. It has something called 'overflow_to_sample_mapping', which returns only zeros, which I am not sure how to use. The 2.11.0 documentation doesn't mention this return value. (https://huggingface.co/transformers/v2.11.0/main_classes/tokenizer.html) . But it looks the data processor wouldn't give overflowing tokens or notice an issue for 2.11.0 as well. And it doesn't seem to have a method to get strided overflowing tokens.
07-02-2020 21:55:25
07-02-2020 21:55:25
Hi, overflowing tokens are handled differently between slow and fast tokenizers with fast tokenizers having better support. Which kind of tokenizer are you using?<|||||>I am using the fast tokenizers <|||||>Ok, currently we don't handle overflowing in fast tokenizers with this processing script. This is on the short term roadmap though.
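For context on how the fast tokenizers surface overflow, a hedged sketch of the behavior discussed here: with a fast tokenizer, `return_overflowing_tokens=True` yields one encoded window per stride chunk plus an `overflow_to_sample_mapping`, rather than the old `overflowing_tokens` list. Parameter values below are illustrative and the exact keys may depend on the library version.

```python
# Sketch (fast tokenizer): overflow comes back as extra full-length windows,
# with overflow_to_sample_mapping pointing each window at its source example.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

question = "What is the capital of France?"
context = "France is a country in Europe. " * 50   # long enough to overflow

encoded = tokenizer(
    question,
    context,
    truncation="only_second",
    max_length=64,
    stride=16,
    padding="max_length",
    return_overflowing_tokens=True,
)

print(len(encoded["input_ids"]))               # number of windows, not 1
print(encoded["overflow_to_sample_mapping"])   # e.g. [0, 0, 0, ...]
```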
transformers
5,477
closed
Add DeeBERT (entropy-based early exiting for *BERT)
Add DeeBERT (entropy-based early exiting for *BERT). Paper: https://www.aclweb.org/anthology/2020.acl-main.204/ Based on its original repository: https://github.com/castorini/DeeBERT
07-02-2020 21:38:19
07-02-2020 21:38:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=h1) Report > Merging [#5477](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **increase** coverage by `0.41%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5477/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5477 +/- ## ========================================== + Coverage 77.83% 78.25% +0.41% ========================================== Files 141 141 Lines 24634 24634 ========================================== + Hits 19175 19278 +103 + Misses 5459 5356 -103 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=footer). Last update [58cca47...f44de41](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Btw: would be awesome so see a token classification example πŸ˜…<|||||>Hi @JetRunner, thanks for the review! I have updated according to your suggestions.<|||||>2 checks fail, however they don't seem relevant to my commits.<|||||>@LysandreJik Thanks for the comments and I've updated accordingly!
transformers
5,476
closed
Seq2Seq: Option to not store whole dataset in memory
https://github.com/huggingface/transformers/blob/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d/examples/seq2seq/utils.py#L93
07-02-2020 20:17:07
07-02-2020 20:17:07
Do you still need help? I can help out and contribute here. <|||||>Yes that would be super helpful! The goal is to avoid using so much CPU memory here: https://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/examples/seq2seq/utils.py#L101 by only reading a few batches from disk at a time instead of all at once. Will probably require saving in a different format. Maybe something like this: https://github.com/pytorch/fairseq/blob/f0a61a2774aff2efbc1adb0b5daee346a8401605/fairseq/data/data_utils.py#L55 Let me know if you need more info!<|||||>Great! My idea is to lazily read & encode just the required line numbers from the file when `__getitem__` is called with an index. For this we could create a map of {example number: line numbers} to read. Let me know what you think.<|||||>Sounds reasonable. What does fairseq do?<|||||>Your approach sounds good, feel free to get started. Another approach would be not pad inputs when they are getting cached and then make a batch at load time. <|||||>I think fairseq had the data in multiple files instead of one big one. Sounds good - I am working on it. Will share when I have a tested version. <|||||>Couple of questions : I plan to get rid of the self.source variable; 1. can I get rid of this property?https://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/examples/seq2seq/utils.py#L146-L148 2. Any ideas on how to use the sampler without the full dataset? In general shuffling and sampling may be limited with lazy datasets: although you should be able to use a random sampler in your loader. https://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/examples/seq2seq/utils.py#L155<|||||>@sshleifer I have a working solution that works for the rest of the training loop and passes the tests. See [here](https://github.com/huggingface/transformers/compare/master...Pradhy729:lazy_loading_seq2seq) Just need input on my points above. Let me know.<|||||>1) You can get rid of `src_lens` and `tgt_lens`, they are unused afaict 2) I would suggest trying to store the len of each example, (tokenized or untokenized, but not padded), and passing that to `SortishSampler` instead of `self.source`, and then changing `def key(self, i): len(self.data[i])` -> `def key(self, i): self.data[i]`. https://github.com/huggingface/transformers/blob/7e86d070c0bbed949b5c922f914f0fec44af72d4/examples/seq2seq/utils.py#L203. <|||||>Like https://github.com/huggingface/transformers/pull/5818<|||||>Got it. Good idea!
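A minimal sketch of the lazy-loading idea discussed above, assuming one example per line in the source/target files. The class and field names are illustrative and not the actual `examples/seq2seq` API; the offset index plays the role of the {example number: line number} map, and the per-example lengths could be fed to a sortish sampler in place of the full dataset.

```python
# Sketch: index byte offsets once, then read + tokenize a single line per __getitem__.
from torch.utils.data import Dataset


class LazySeq2SeqDataset(Dataset):
    def __init__(self, tokenizer, src_path, tgt_path, max_length=512):
        self.tokenizer = tokenizer
        self.src_path, self.tgt_path = src_path, tgt_path
        self.max_length = max_length
        self.src_offsets = self._index(src_path)   # example number -> byte offset of its line
        self.tgt_offsets = self._index(tgt_path)

    @staticmethod
    def _index(path):
        offsets, pos = [], 0
        with open(path, "rb") as f:
            for line in f:
                offsets.append(pos)
                pos += len(line)
        return offsets

    @staticmethod
    def _read_line(path, offset):
        with open(path, "rb") as f:
            f.seek(offset)
            return f.readline().decode("utf-8").strip()

    def __len__(self):
        return len(self.src_offsets)

    def __getitem__(self, i):
        src = self._read_line(self.src_path, self.src_offsets[i])
        tgt = self._read_line(self.tgt_path, self.tgt_offsets[i])
        src_enc = self.tokenizer(src, truncation=True, max_length=self.max_length, return_tensors="pt")
        tgt_enc = self.tokenizer(tgt, truncation=True, max_length=self.max_length, return_tensors="pt")
        return {
            "input_ids": src_enc["input_ids"][0],
            "attention_mask": src_enc["attention_mask"][0],
            "labels": tgt_enc["input_ids"][0],
        }
```

Reopening the file on every `__getitem__` keeps the sketch simple; padding to a fixed length per batch would still be done by a collate function at load time, as suggested above.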
transformers
5,475
closed
35 Model Hub entries fail AutoConfig
Here is what I ran: ```python from transformers.hf_api import HfApi from tqdm import tqdm import pandas as pd model_list = HfApi().model_list() model_ids = [x.modelId for x in model_list] from transformers import AutoConfig def check_hub(cls, model_ids): results = {} failure_data = {} for m in tqdm(model_ids): try: cls.from_pretrained(m) results[m] = True except Exception as e: failure_data[m] = e.args results[m] = False return results, failure_data results, failure_data = check_hub(AutoConfig, model_ids) print(failure_data) ``` Results: ```python {'DeBERTa/base': ('Unrecognized model in DeBERTa/base. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'DeBERTa/large': ('Unrecognized model in DeBERTa/large. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'Itcast/cnc_output': ('Unrecognized model in Itcast/cnc_output. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'Narsil/fr_pretrained': ('Unrecognized model in Narsil/fr_pretrained. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'Narsil/pretrained': ('Unrecognized model in Narsil/pretrained. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'Narsil/pretrained2': ('Unrecognized model in Narsil/pretrained2. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'abryee/TigXLNet': ('`d_head` (64) should be equal to `d_model // n_head` (48)',), 'adamlin/ClinicalBert_all_notes': ('Unrecognized model in adamlin/ClinicalBert_all_notes. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'adamlin/ClinicalBert_disch': ('Unrecognized model in adamlin/ClinicalBert_disch. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers': ('Unrecognized model in adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers': ('Unrecognized model in adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'dccuchile/cased': ('Unrecognized model in dccuchile/cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'dccuchile/uncased': ('Unrecognized model in dccuchile/uncased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12': ('Unrecognized model in djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'facebook/dpr-ctx_encoder-single-nq-base': ('dpr',), 'facebook/dpr-question_encoder-single-nq-base': ('dpr',), 'facebook/dpr-reader-single-nq-base': ('dpr',), 'healx/gpt-2-pubmed-large': ('Unrecognized model in healx/gpt-2-pubmed-large. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'healx/gpt-2-pubmed-medium': ('Unrecognized model in healx/gpt-2-pubmed-medium. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'hfl/rbt3': ('Unrecognized model in hfl/rbt3. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'hfl/rbtl3': ('Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'microsoft/unilm-base-cased': ('Unrecognized model in microsoft/unilm-base-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'microsoft/unilm-large-cased': ('Unrecognized model in microsoft/unilm-large-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'mrm8488/prunebert-base-uncased-finepruned-topK-squadv2': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-l0-reg-tydiqa-for-xqa': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-magnitude-tydiqa-for-xqa': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-soft-movement-tydiqa-for-xqa': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-topK-tydiqa-for-xqa': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-tydiqa-for-xqa': ('masked_bert',), 'oda/music5': ('Unrecognized model in oda/music5. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'pertschuk/0_RoBERTa': ('Unrecognized model in pertschuk/0_RoBERTa. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'radha1258/save': ('Unrecognized model in radha1258/save. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'sshleifer/blenderbot-3B': ('blenderbot',), 'subbareddyiiit/iiit': ('Unrecognized model in subbareddyiiit/iiit. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'subbareddyiiit/tftelugu': ('Unrecognized model in subbareddyiiit/tftelugu. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',)} ```
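Since the raw `failure_data` dump above is hard to scan, here is a small, hypothetical helper for bucketing the failures by the kind of error message; it assumes the `failure_data` dict produced by the script above, and the bucket labels are only illustrative.

```python
from collections import Counter


def summarize_failures(failure_data):
    """Tally the AutoConfig failures above by the kind of error they raised."""
    buckets = Counter()
    for model_id, args in failure_data.items():
        message = " ".join(str(a) for a in args)
        if "model_type" in message:
            buckets["config.json is missing a model_type key"] += 1
        elif "should be equal to" in message:
            buckets["inconsistent config values"] += 1
        else:
            # Single-word errors such as ('dpr',), ('masked_bert',) or
            # ('blenderbot',): model types this transformers version does not
            # ship a config class for.
            buckets[f"unsupported model type: {message}"] += 1
    return buckets


# print(summarize_failures(failure_data))
```

Grouping this way makes it easier to see that most of the failures above are configs without a `model_type` key, with a handful of unsupported model types and one inconsistent config.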
07-02-2020 20:04:09
07-02-2020 20:04:09
142 AutoTokenizer Failures (the original 35 +107 more). What would help with the `sshleifer` ones (at least) is if I could somehow say "this is the same as the `BartTokenizer` without uploading the same files all over again. Sadly, S3 does not support symlinks. ``` {'DeBERTa/base': ('Unrecognized model in DeBERTa/base. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'DeBERTa/large': ('Unrecognized model in DeBERTa/large. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'Huntersx/cola_model': ("Model name 'Huntersx/cola_model' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'Huntersx/cola_model' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'Itcast/cnc_output': ('Unrecognized model in Itcast/cnc_output. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'JerryQu/v2 distilgpt2': ("Model name 'JerryQu/v2 distilgpt2' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'JerryQu/v2 distilgpt2' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'Narsil/fr_pretrained': ('Unrecognized model in Narsil/fr_pretrained. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'Narsil/pretrained': ('Unrecognized model in Narsil/pretrained. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'Narsil/pretrained2': ('Unrecognized model in Narsil/pretrained2. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'PubChimps/dl-bert': ("Model name 'PubChimps/dl-bert' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'PubChimps/dl-bert' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'Tereveni-AI/gpt2-124M-uk-fiction': ('expected str, bytes or os.PathLike object, not NoneType',), 'WikinewsSum/bart-large-multi-combine-wiki-news': ('expected str, bytes or os.PathLike object, not NoneType',), 'WikinewsSum/bert2bert-multi-de-wiki-news': ("Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.",), 'WikinewsSum/bert2bert-multi-en-wiki-news': ("Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.",), 'WikinewsSum/bert2bert-multi-fr-wiki-news': ("Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.",), 'abryee/TigXLNet': ('`d_head` (64) should be equal to `d_model // n_head` (48)',), 'adamlin/ClinicalBert_all_notes': ('Unrecognized model in adamlin/ClinicalBert_all_notes. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'adamlin/ClinicalBert_disch': ('Unrecognized model in adamlin/ClinicalBert_disch. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers': ('Unrecognized model in adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers': ('Unrecognized model in adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'ahotrod/roberta_large_squad2': ('expected str, bytes or os.PathLike object, not NoneType',), 'aicast/bert_finetuning_test': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',), 'airKlizz/bart-large-multi-combine-wiki-news': ('expected str, bytes or os.PathLike object, not NoneType',), 'airKlizz/bert2bert-multi-de-wiki-news': ("Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.",), 'airKlizz/bert2bert-multi-en-wiki-news': ("Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.",), 'airKlizz/bert2bert-multi-fr-wiki-news': ("Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.",), 'allegro/herbert-klej-cased-v1': ("Model name 'allegro/herbert-klej-cased-v1' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). 
We assumed 'allegro/herbert-klej-cased-v1' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'beyhan/checkpoint-3750': ("Model name 'beyhan/checkpoint-3750' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'beyhan/checkpoint-3750' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'camembert/camembert-base': ("Model name 'camembert/camembert-base' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'camembert/camembert-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'castorini/monot5-base-msmarco': ("Model name 'castorini/monot5-base-msmarco' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'castorini/monot5-base-msmarco' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'chrisliu298/arxiv_ai_gpt2': ('expected str, bytes or os.PathLike object, not NoneType',), 'clue/albert_chinese_small': ("Model name 'clue/albert_chinese_small' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'clue/albert_chinese_small' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'clue/albert_chinese_tiny': ("Model name 'clue/albert_chinese_tiny' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). 
We assumed 'clue/albert_chinese_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'clue/roberta_chinese_3L312_clue_tiny': ("Model name 'clue/roberta_chinese_3L312_clue_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_3L312_clue_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'clue/roberta_chinese_3L768_clue_tiny': ("Model name 'clue/roberta_chinese_3L768_clue_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_3L768_clue_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'clue/roberta_chinese_base': ("Model name 'clue/roberta_chinese_base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'clue/roberta_chinese_clue_large': ("Model name 'clue/roberta_chinese_clue_large' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_clue_large' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'clue/roberta_chinese_clue_tiny': ("Model name 'clue/roberta_chinese_clue_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_clue_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'clue/roberta_chinese_large': ("Model name 'clue/roberta_chinese_large' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_large' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'clue/roberta_chinese_pair_large': ("Model name 'clue/roberta_chinese_pair_large' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). 
We assumed 'clue/roberta_chinese_pair_large' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'clue/roberta_chinese_pair_tiny': ("Model name 'clue/roberta_chinese_pair_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_pair_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'codegram/calbert-base-uncased': ("Model name 'codegram/calbert-base-uncased' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'codegram/calbert-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'codegram/calbert-tiny-uncased': ("Model name 'codegram/calbert-tiny-uncased' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'codegram/calbert-tiny-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'damien-ir/discriminator': ("Model name 'damien-ir/discriminator' was not found in tokenizers model name list (google/electra-small-generator, google/electra-base-generator, google/electra-large-generator, google/electra-small-discriminator, google/electra-base-discriminator, google/electra-large-discriminator). We assumed 'damien-ir/discriminator' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'dccuchile/cased': ('Unrecognized model in dccuchile/cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'dccuchile/uncased': ('Unrecognized model in dccuchile/uncased. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'denpa92/bert-base-cantonese': ("Model name 'denpa92/bert-base-cantonese' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'denpa92/bert-base-cantonese' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12': ('Unrecognized model in djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'dslim23/bert-base-cased-NER-conll-2003': ("Model name 'dslim23/bert-base-cased-NER-conll-2003' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'dslim23/bert-base-cased-NER-conll-2003' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'elgeish/cs224n-squad2.0-distilbert-base-uncased': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',), 'elgeish/cs224n-squad2.0-roberta-base': ('expected str, bytes or os.PathLike object, not NoneType',), 'facebook/dpr-ctx_encoder-single-nq-base': ('dpr',), 'facebook/dpr-question_encoder-single-nq-base': ('dpr',), 'facebook/dpr-reader-single-nq-base': ('dpr',), 'gaochangkuan/model_dir': ('expected str, bytes or os.PathLike object, not NoneType',), 'google/reformer-enwik8': ("Model name 'google/reformer-enwik8' was not found in tokenizers model name list (google/reformer-crime-and-punishment). We assumed 'google/reformer-enwik8' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'healx/gpt-2-pubmed-large': ('Unrecognized model in healx/gpt-2-pubmed-large. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'healx/gpt-2-pubmed-medium': ('Unrecognized model in healx/gpt-2-pubmed-medium. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'hfl/chinese-roberta-wwm-ext-large': ('expected str, bytes or os.PathLike object, not NoneType',), 'hfl/chinese-roberta-wwm-ext': ('expected str, bytes or os.PathLike object, not NoneType',), 'hfl/rbt3': ('Unrecognized model in hfl/rbt3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'hfl/rbtl3': ('Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'huseinzol05/bert-base-bahasa-cased': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',), 'huseinzol05/tiny-bert-bahasa-cased': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',), 'lhoestq/distilbert-base-uncased-finetuned-absa-as': ("Model name 'lhoestq/distilbert-base-uncased-finetuned-absa-as' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). We assumed 'lhoestq/distilbert-base-uncased-finetuned-absa-as' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'lonePatient/albert_chinese_small': ("Model name 'lonePatient/albert_chinese_small' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'lonePatient/albert_chinese_small' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'lonePatient/roberta_chinese_clue_tiny': ("Model name 'lonePatient/roberta_chinese_clue_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). 
We assumed 'lonePatient/roberta_chinese_clue_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'm-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alberto': ("Model name 'm-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alberto' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'm-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alberto' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'microsoft/Multilingual-MiniLM-L12-H384': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',), 'microsoft/unilm-base-cased': ('Unrecognized model in microsoft/unilm-base-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'microsoft/unilm-large-cased': ('Unrecognized model in microsoft/unilm-large-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'moumeneb1/bert-base-multilingual-cased-ecology_crisis': ("Model name 'moumeneb1/bert-base-multilingual-cased-ecology_crisis' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'moumeneb1/bert-base-multilingual-cased-ecology_crisis' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'moumeneb1/flaubert-base-cased-ecology_crisis': ("Model name 'moumeneb1/flaubert-base-cased-ecology_crisis' was not found in tokenizers model name list (flaubert/flaubert_small_cased, flaubert/flaubert_base_uncased, flaubert/flaubert_base_cased, flaubert/flaubert_large_cased). 
We assumed 'moumeneb1/flaubert-base-cased-ecology_crisis' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'mrm8488/bert-uncased-finetuned-qnli': ("Model name 'mrm8488/bert-uncased-finetuned-qnli' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/bert-uncased-finetuned-qnli' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'mrm8488/prunebert-base-uncased-finepruned-topK-squadv2': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-l0-reg-tydiqa-for-xqa': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-magnitude-tydiqa-for-xqa': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-soft-movement-tydiqa-for-xqa': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-topK-tydiqa-for-xqa': ('masked_bert',), 'mrm8488/prunebert-multi-uncased-finepruned-tydiqa-for-xqa': ('masked_bert',), 'mrm8488/roberta-large-finetuned-wsc': ('expected str, bytes or os.PathLike object, not NoneType',), 'mrm8488/spanbert-base-finetuned-squadv1': ("Model name 'mrm8488/spanbert-base-finetuned-squadv1' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/spanbert-base-finetuned-squadv1' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'mrm8488/spanbert-base-finetuned-squadv2': ("Model name 'mrm8488/spanbert-base-finetuned-squadv2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'mrm8488/spanbert-base-finetuned-squadv2' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'mrm8488/spanbert-large-finetuned-squadv1': ("Model name 'mrm8488/spanbert-large-finetuned-squadv1' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/spanbert-large-finetuned-squadv1' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'mrm8488/spanbert-large-finetuned-squadv2': ("Model name 'mrm8488/spanbert-large-finetuned-squadv2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/spanbert-large-finetuned-squadv2' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'oda/music5': ('Unrecognized model in oda/music5. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'patrickvonplaten/reformer-random': ("Model name 'patrickvonplaten/reformer-random' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'patrickvonplaten/reformer-random' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'patrickvonplaten/reformer-tiny-random': ("Model name 'patrickvonplaten/reformer-tiny-random' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'patrickvonplaten/reformer-tiny-random' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'pertschuk/0_RoBERTa': ('Unrecognized model in pertschuk/0_RoBERTa. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'pertschuk/albert-base-squad-classifier-ms': ("Model name 'pertschuk/albert-base-squad-classifier-ms' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'pertschuk/albert-base-squad-classifier-ms' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'pertschuk/albert-base-squad-classifier': ("Model name 'pertschuk/albert-base-squad-classifier' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'pertschuk/albert-base-squad-classifier' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'pertschuk/albert-intent-model-v3': ("Model name 'pertschuk/albert-intent-model-v3' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'pertschuk/albert-intent-model-v3' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'radha1258/save': ('Unrecognized model in radha1258/save. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'ramsrigouthamg/t5_boolean_questions': ("Model name 'ramsrigouthamg/t5_boolean_questions' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'ramsrigouthamg/t5_boolean_questions' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'ramsrigouthamg/t5_paraphraser': ("Model name 'ramsrigouthamg/t5_paraphraser' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'ramsrigouthamg/t5_paraphraser' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'ramsrigouthamg/t5_squad': ("Model name 'ramsrigouthamg/t5_squad' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'ramsrigouthamg/t5_squad' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'ran/c10': ("Model name 'ran/c10' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'ran/c10' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'ran/c9': ("Model name 'ran/c9' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'ran/c9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'ran/h1': ("Model name 'ran/h1' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'ran/h1' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'ran/y7': ("Model name 'ran/y7' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'ran/y7' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization': ("Model name 'remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'remi/bertabs-finetuned-extractive-abstractive-summarization': ("Model name 'remi/bertabs-finetuned-extractive-abstractive-summarization' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'remi/bertabs-finetuned-extractive-abstractive-summarization' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'remi/bertabs-finetuned-xsum-extractive-abstractive-summarization': ("Model name 'remi/bertabs-finetuned-xsum-extractive-abstractive-summarization' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'remi/bertabs-finetuned-xsum-extractive-abstractive-summarization' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'savasy/checkpoint-1250': ("Model name 'savasy/checkpoint-1250' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'savasy/checkpoint-1250' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'savasy/checkpoint-1875': ("Model name 'savasy/checkpoint-1875' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'savasy/checkpoint-1875' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'savasy/model': ("Model name 'savasy/model' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'savasy/model' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'schmidek/electra-small-cased': ("Model name 'schmidek/electra-small-cased' was not found in tokenizers model name list (google/electra-small-generator, google/electra-base-generator, google/electra-large-generator, google/electra-small-discriminator, google/electra-base-discriminator, google/electra-large-discriminator). We assumed 'schmidek/electra-small-cased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'shauryr/arqmath-roberta-base-1.5M': ("Model name 'shauryr/arqmath-roberta-base-1.5M' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'shauryr/arqmath-roberta-base-1.5M' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'shauryr/arqmath-roberta-base-2M': ("Model name 'shauryr/arqmath-roberta-base-2M' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). 
We assumed 'shauryr/arqmath-roberta-base-2M' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'shauryr/arqmath-roberta-base': ("Model name 'shauryr/arqmath-roberta-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'shauryr/arqmath-roberta-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'shauryr/checkpoint-475000': ("Model name 'shauryr/checkpoint-475000' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'shauryr/checkpoint-475000' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'shoarora/alectra-small-owt': ("Model name 'shoarora/alectra-small-owt' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'shoarora/alectra-small-owt' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'spentaur/yelp': ("Model name 'spentaur/yelp' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). We assumed 'spentaur/yelp' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/blenderbot-3B': ('blenderbot',), 'sshleifer/cnn_student_d6': ("Model name 'sshleifer/cnn_student_d6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/cnn_student_d6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/mbart-large-cc25': ("Model name 'sshleifer/mbart-large-cc25' was not found in tokenizers model name list (facebook/mbart-large-en-ro, facebook/mbart-large-cc25). We assumed 'sshleifer/mbart-large-cc25' was a path, a model identifier, or url to a directory containing vocabulary files named ['sentencepiece.bpe.model'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/mbart-large-en-ro': ("Model name 'sshleifer/mbart-large-en-ro' was not found in tokenizers model name list (facebook/mbart-large-en-ro, facebook/mbart-large-cc25). 
We assumed 'sshleifer/mbart-large-en-ro' was a path, a model identifier, or url to a directory containing vocabulary files named ['sentencepiece.bpe.model'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_cnn_12_3': ("Model name 'sshleifer/student_cnn_12_3' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_12_3' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_cnn_12_6': ("Model name 'sshleifer/student_cnn_12_6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_12_6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_cnn_12_9': ("Model name 'sshleifer/student_cnn_12_9' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_12_9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_cnn_6_6': ("Model name 'sshleifer/student_cnn_6_6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_6_6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_cnn_9_12': ("Model name 'sshleifer/student_cnn_9_12' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_9_12' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_cnn_9_9': ("Model name 'sshleifer/student_cnn_9_9' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_9_9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_xsum_12_3': ("Model name 'sshleifer/student_xsum_12_3' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). 
We assumed 'sshleifer/student_xsum_12_3' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_xsum_12_4': ("Model name 'sshleifer/student_xsum_12_4' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_12_4' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_xsum_12_6': ("Model name 'sshleifer/student_xsum_12_6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_12_6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_xsum_12_9': ("Model name 'sshleifer/student_xsum_12_9' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_12_9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_xsum_3_12': ("Model name 'sshleifer/student_xsum_3_12' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_3_12' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_xsum_6_12': ("Model name 'sshleifer/student_xsum_6_12' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_6_12' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_xsum_6_6': ("Model name 'sshleifer/student_xsum_6_6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_6_6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_xsum_9_12': ("Model name 'sshleifer/student_xsum_9_12' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). 
We assumed 'sshleifer/student_xsum_9_12' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/student_xsum_9_9': ("Model name 'sshleifer/student_xsum_9_9' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_9_9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/t5-base-cnn': ("Model name 'sshleifer/t5-base-cnn' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'sshleifer/t5-base-cnn' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'sshleifer/tinier_bart': ("Model name 'sshleifer/tinier_bart' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/tinier_bart' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.",), 'subbareddyiiit/iiit': ('Unrecognized model in subbareddyiiit/iiit. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'subbareddyiiit/tftelugu': ('Unrecognized model in subbareddyiiit/tftelugu. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',), 'voidful/albert_chinese_base': ("Model name 'voidful/albert_chinese_base' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'voidful/albert_chinese_large': ("Model name 'voidful/albert_chinese_large' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_large' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'voidful/albert_chinese_small': ("Model name 'voidful/albert_chinese_small' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). 
We assumed 'voidful/albert_chinese_small' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'voidful/albert_chinese_tiny': ("Model name 'voidful/albert_chinese_tiny' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'voidful/albert_chinese_xlarge': ("Model name 'voidful/albert_chinese_xlarge' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_xlarge' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'voidful/albert_chinese_xxlarge': ("Model name 'voidful/albert_chinese_xxlarge' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_xxlarge' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.",), 'wptoux/albert-chinese-large-qa': ('not a string',)} ```<|||||>For reference #3359<|||||>Yes thanks for linking this @patrickvonplaten (I intended to look for this as well) The model pages for those models should already display a (more or less) descriptive message (e.g. https://huggingface.co/djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12) so I believe we can close this.<|||||>the problem of denpa92/bert-base-cantonese is not solved.<|||||>Is there some way for us to, like, change the config file and make a pull request? I'm not 100% sure how to find the Adam Lin that added ClinicalBert_all_notes and ask him to change it himself...<|||||>> Is there some way for us to, like, change the config file and make a pull request? I'm not 100% sure how to find the Adam Lin that added ClinicalBert_all_notes and ask him to change it himself... I think we would like to enable pull requests on model repositories (cc @julien-c)<|||||>Great to hear, @patrickvonplaten. And sorry for the naive question, but where would I find these repos? I've tried searching around a bit for ClinicalBert_all_notes and I've yet to find it on GitHub...<|||||>@drussellmrichie, on the model hub :) https://huggingface.co/models
transformers
5,474
closed
Can't use AutoModelForCausalLM with bert
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): bert-base-uncased Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] my own modified scripts: (give details below) Here is a simple 3 lines of code you can try to replicate the bug: from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained('bert-base-uncased') model = AutoModelForCausalLM.from_pretrained('bert-base-uncased', is_decoder=True) The tasks I am working on is: XSUM / CNNDM summarization ## To reproduce Steps to reproduce the behavior: 1. run the first 2 lines of code I put in the script section 2. run the first and third line of code I put in the script section <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> If you run the second line of code, you get: AssertionError: If you want to use `BertLMHeadModel` as a standalone, add `is_decoder=True`. If you run the third line of code (add is_decoder=True), you get: TypeError: __init__() got an unexpected keyword argument 'is_decoder' The first error occurs because it creates a default bert-base-uncased config, which does not set is_decoder to True. This is reasonable behavior. The second error occurs because when you pass in is_decoder=True, it correctly gets added to the config, but is incorrectly passed to the model __init__. In this case, BertLMHeadModel's init ONLY takes a config - it does not accept ANY kwargs. Thus we crash. I don't think this is intended behavior - I feel like its reasonable to think you can pass in is_decoder to the config you want to create in AutoModelForCausalLM without crashing. ## Expected behavior I expect if I run the code AutoModelForCausalLM('bert-base-uncased'), I will get back a BertLMHeadModel back with the is_decoder flag set to true in the config. Alternatively, I expect if I run the code AutoModelForCausalLM('bert-base-uncased', is_decoder=True) to get the same result. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.0 - Platform: Linux-3.10.0-862.14.4.el7.x86_64-x86_64-with-centos-7.5.1804-Core - Python version: 3.7.3 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: tried with both - Using distributed or parallel set-up in script?: no
07-02-2020 19:59:14
07-02-2020 19:59:14
Can reproduce :-) Opened a PR to fix it - thanks for the issue @sshearing !
transformers
5,473
closed
TFAutoModelForSequenceClassification: ValueError: Layer #1 (named "classifier") expects 2 weight(s), but the saved weights have 4 element(s).
# πŸ› Bug ## Information `TFAutoModelForSequenceClassification` does not work on v3.0.0 / can't load a model that was working on v2.11.0 ## To reproduce Steps to reproduce the behavior: - This work: ```python !pip install transformers==2.11.0 from transformers import TFAutoModelForSequenceClassification model = TFAutoModelForSequenceClassification.from_pretrained("tblard/tf-allocine") ``` but not this: ```python !pip install transformers>=3.0.0 from transformers import TFAutoModelForSequenceClassification model = TFAutoModelForSequenceClassification.from_pretrained("tblard/tf-allocine") ``` as it outputs: ```shell ValueError: Layer #1 (named "classifier") expects 2 weight(s), but the saved weights have 4 element(s). ``` - Using `TFCamembertForSequenceClassification` instead of `TFAutoModelForSequenceClassification` also works. - I couldn't find any other model using `TFAutoModelForSequenceClassification` on the model zoo to verify the issue does not come from the model itself. ## Expected behavior No errors. ## Environment info Standard Google colab environment. - `transformers` version: 3.0.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
07-02-2020 19:11:28
07-02-2020 19:11:28
Hi! Thanks for opening this issue. This should have been fixed by https://github.com/huggingface/transformers/pull/5414.
transformers
5,472
closed
Truncation in GLUE should be longest first
The GLUE example currently crashes with the QQP task because of the truncation. It outputs the following warnings: ``` ERROR:transformers.tokenization_utils:We need to remove 186 to truncate the inputbut the first sequence has a length 25. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'. ERROR:transformers.tokenization_utils:We need to remove 49 to truncate the inputbut the first sequence has a length 34. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'. ERROR:transformers.tokenization_utils:We need to remove 203 to truncate the inputbut the first sequence has a length 42. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'. ERROR:transformers.tokenization_utils:We need to remove 39 to truncate the inputbut the first sequence has a length 28. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'. ERROR:transformers.tokenization_utils:We need to remove 23 to truncate the inputbut the first sequence has a length 20. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'. ERROR:transformers.tokenization_utils:We need to remove 91 to truncate the inputbut the first sequence has a length 63. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'. ``` before crashing with the following: ``` ValueError: expected sequence of length 128 at dim 1 (got 202) ``` closes https://github.com/huggingface/transformers/issues/5460
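For illustration, a minimal sketch of the intended behaviour (assuming the v3 tokenizer API; the exact call site inside the GLUE processor may differ): with `longest_first`, both sentences of a QQP pair are trimmed until the pair fits `max_length`, instead of only the first one.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
enc = tokenizer.encode_plus(
    "a short first question?",
    "a much longer second question " * 20,  # long enough to force truncation
    max_length=128,
    padding="max_length",
    truncation="longest_first",  # trims whichever sequence is currently longest
)
print(len(enc["input_ids"]))  # 128
```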
07-02-2020 17:52:28
07-02-2020 17:52:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=h1) Report > Merging [#5472](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/306f1a269504b781f886d75105acabf8ae95bd11&el=desc) will **decrease** coverage by `1.08%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5472/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5472 +/- ## ========================================== - Coverage 77.86% 76.77% -1.09% ========================================== Files 141 141 Lines 24608 24608 ========================================== - Hits 19160 18892 -268 - Misses 5448 5716 +268 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (ΓΈ)` | | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=footer). Last update [306f1a2...5f25ea3](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,471
closed
Update: ElectraDiscriminatorPredictions forward.
`ElectraDiscriminatorPredictions.forward` should not need `attention_mask`.
07-02-2020 17:47:15
07-02-2020 17:47:15
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=h1) Report > Merging [#5471](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/13a8588f2d70fe78dc36d84829c04fa9d39572d1&el=desc) will **increase** coverage by `1.14%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5471/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5471 +/- ## ========================================== + Coverage 76.77% 77.92% +1.14% ========================================== Files 141 141 Lines 24617 24617 ========================================== + Hits 18900 19183 +283 + Misses 5717 5434 -283 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.62% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (ΓΈ)` | | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+1.32%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=footer). Last update [13a8588...42044b4](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,470
closed
Unable to use run_squad with xla_spawn.py on TPU
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Electra Language I am using the model on (English, Chinese ...): ENG The problem arises when using: * [ ] the official example scripts: RUN_squad.py + xla_spawn.py The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name): official SQUaD task ## To reproduce Steps to reproduce the behavior: 1. Install pytorch-xla on colab using: ``` VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"] !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version $VERSION ``` 2. Trying to run_squad on colab TPUs using xla_spawn.py ``` python examples/xla_spawn.py --num_cores 8 \ examples/question-answering/run_squad.py \ --model_type electra \ --model_name_or_path google/electra-base-discriminator \ --do_train \ --do_eval \ --do_lower_case \ --train_file "/content/drive/My Drive/bert/train.json" \ --predict_file "/content/drive/My Drive/bert/val.json" \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir "/content/drive/My Drive/bert/newdir6" ``` 2. Error is thrown up ``` Traceback (most recent call last): File "examples/xla_spawn.py", line 72, in <module> main() File "examples/xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) AttributeError: module 'run_squad' has no attribute '_mp_fn' ``` ## Expected behavior Training should run properly using xla_spawn.py, which it does for GLUE tasks using: ``` python examples/xla_spawn.py --num_cores 8 \ examples/text-classification/run_glue.py ``` ## Environment info - `transformers` version: 2nd July 2020 clone. - Platform: Colab - Python version: 3.8 - PyTorch version (GPU?): 20200325 (pytorch-xla) - Tensorflow version (GPU?):N - Using GPU in script?:N - Using distributed or parallel set-up in script?:N
07-02-2020 16:45:52
07-02-2020 16:45:52
Hi! The SQuAD example doesn't have Trainer support yet. We're in the process of adding it. You can see the supported tasks [here](https://github.com/huggingface/transformers/tree/master/examples#the-big-table-of-tasks); only the tasks with Trainer, TFTrainer or pytorch-lightning support can run on TPU.
transformers
5,469
closed
[Discussion] fix zero division error (Reformer batch size bug)
This PR is for discussion. During training of the Reformer model, I noticed that a zero division error often occurs when you increase the batch size. With these changes the error no longer occurs.
07-02-2020 16:23:45
07-02-2020 16:23:45
transformers
5,468
closed
Fix saved model creation
Fixes a bug where the parameters `output_hidden_states` and `output_attentions` were ignored when a saved model was created. Reproducibility with TF 2.2:
```python
import tensorflow as tf
from transformers import TFBertModel, BertTokenizer, BertConfig

config = BertConfig.from_pretrained("bert-base-multilingual-uncased", output_hidden_states=True)
model = TFBertModel.from_pretrained('bert-base-multilingual-uncased', config=config)
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased", use_fast=False)
features = tokenizer.encode_plus("Hello world.", add_special_tokens=True, max_length=48, pad_to_max_length=True, return_tensors="tf", truncation=True)

model._saved_model_inputs_spec = None
model._set_save_spec(dict(features))

tf.saved_model.save(model, "save/model")
```
Then run the serving CLI with:
```
saved_model_cli show --dir save/model/ --tag_set serve --signature_def serving_default
```
There will be only 2 outputs instead of 3.
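For completeness, a small sketch (added for illustration, assuming the same `save/model` path as above) of inspecting the exported signature from Python instead of the CLI:

```python
import tensorflow as tf

loaded = tf.saved_model.load("save/model")
serving_fn = loaded.signatures["serving_default"]
# with the fix, the hidden states requested via the config show up as a third entry
print(serving_fn.structured_outputs)
```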
07-02-2020 15:06:40
07-02-2020 15:06:40
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=h1) Report > Merging [#5468](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5a0dac53bfd6e69ae64fb3119d607445e1a308d8&el=desc) will **increase** coverage by `0.34%`. > The diff coverage is `93.30%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5468/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5468 +/- ## ========================================== + Coverage 79.33% 79.67% +0.34% ========================================== Files 146 146 Lines 26611 26582 -29 ========================================== + Hits 21111 21180 +69 + Misses 5500 5402 -98 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.50% <0.00%> (-0.60%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <16.66%> (-63.98%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <71.42%> (-1.85%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.87% <88.23%> (+0.22%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `76.73% <89.28%> (+0.26%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `91.34% <95.83%> (-0.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.75% <96.15%> (-0.03%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.93% <100.00%> (-0.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.84% <100.00%> (-0.01%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.78% <100.00%> (+34.60%)` | :arrow_up: | | ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=footer). Last update [5a0dac5...69ea0de](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Sure! Will try to find a proper test for that.<|||||>Awesome! Fearing many merge conflicts with https://github.com/huggingface/transformers/pull/5395#pullrequestreview-442374095 :D <|||||>I fear the same! We should wait to merge #5395 before to merge this one.<|||||>CirlceCi gives the following message error `Too long with no output (exceeded 10m0s): context deadline exceeded` does it means that the tests takes too long now?<|||||>oof yeah, that's what it means! Do you have any idea of how long the added tests take in your local environment, compared to the full test suite?<|||||>I get around 1.5 min per new test / per model => 3min per model => ~33min but this is on my laptop which is really cheap<|||||>That's a slow test :) We can mark them as slow for now (using the `@slow` decorator) and monitor how long they take. If they take too long, we'll have to think of a different way to test those.<|||||>Ok good to me<|||||>Still have to do some bugfix and once all the models pass the tests I will put the `@slow` decorator.<|||||>Ok, now all the models can be saved in TF saved model format. I put some tests to be sure of that, but they have the `@slow` decorator. This is good to merge to me. Nevertheless, I have done several changes in the input of several layers, @LysandreJik can you check if you are ok with that? Basically, saved models cannot be run with `list / tuple` inputs, because this is very Python specific and cannot be translated into gRPC.<|||||>@jplu Thanks for your work first! The transformers-based model now can be served by Tensorflow Serving. But I still have one question about max_seg_length. In order to make inference faster, is it possible to set max_length to be None? In tf 1.x, I can use the following codes to make serving accept dynamic max_seq_length. ```python estimator = ... def serving_input_receiver_fn(max_seq_len): input_ids = tf.compat.v1.placeholder(shape=[None, max_seq_len], dtype=tf.int32, name='input_ids') input_mask = tf.compat.v1.placeholder(shape=[None, max_seq_len], dtype=tf.int32, name='input_mask') segment_ids = tf.compat.v1.placeholder(shape=[None, max_seq_len], dtype=tf.int32, name='segment_ids') features = {'input_ids': input_ids, 'input_mask': input_mask, 'segment_ids': segment_ids} return tf.estimator.export.build_raw_serving_input_receiver_fn(features) estimator.export_savedmodel(model_output_dir, serving_input_receiver_fn(None)) ``` I found following codes work in TF2.x. Just ignore this message. ```python input_feature = { 'input_ids': tf.TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids'), 'token_type_ids': tf.TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids'), 'attention_mask': tf.TensorSpec(shape=(None, None), dtype=tf.int32, name='attention_mask') } model._set_save_spec(input_feature) ``` <|||||>Hello! Thanks for you suggestion. The saved model creation is postponed, and will be for a next PR. This one is here for bugfix only. Your code will work fine, but not for all the models and tasks. 
For example, token classification uses a different shape, and DistilBert doesn't have token_type_ids. Unfortunately, it is a bit more complicated than just putting this piece of code somewhere; it has to be task and model independent.<|||||>@jplu What are the issues in deleting `cast_bool_to_primitive` altogether?<|||||>T5 will not work anymore because the number of outputs depends on the use_cache parameter. And for now we still want to keep a variable length output. We are currently reworking the outputs approach of all the models to output dictionaries instead of tuples. I will come back to this issue of boolean tensors once this new type of output is available.<|||||>Indeed, I think this is fine, mostly because I expect people to hack around the hidden layers mostly for the PyTorch implementations. Good for me!<|||||>This is a valuable point indeed @LysandreJik! Nevertheless, unpacking data is not compliant with TensorFlow Autograph as far as I know, so basically we lose this usage as it was before.<|||||>Alright, sounds good! Could you resolve the merge conflict and then we merge?<|||||>Fixed!
transformers
5,467
closed
Tokenizer summary
This PR introduces a mid/high-level summary of the different tokenizer types used in the library (a bit like the model summary). Preview is [here](https://56179-155220641-gh.circle-artifacts.com/0/docs/_build/html/tokenizer_summary.html).
07-02-2020 14:22:01
07-02-2020 14:22:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=h1) Report > Merging [#5467](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/35befd9ce31c23a774fd34f57bc44033ce70141d&el=desc) will **decrease** coverage by `0.08%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5467/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5467 +/- ## ========================================== - Coverage 77.57% 77.48% -0.09% ========================================== Files 141 141 Lines 24581 24581 ========================================== - Hits 19068 19046 -22 - Misses 5513 5535 +22 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.42% <0.00%> (+1.50%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=footer). Last update [35befd9...80529fa](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,466
closed
Fix typo in glossary
07-02-2020 13:18:15
07-02-2020 13:18:15
transformers
5,465
closed
Fixing missing arguments for TransfoXL tokenizer when using TextGenerationPipeline
As discussed with @LysandreJik and @mfuntowicz, `TextGenerationPipeline` gives imperfect results when using TransfoXL because the tokenizer call lacks the `add_space_before_punct_symbol` argument. To fix this, this PR overrides `_parse_and_tokenize` for this pipeline so that tokenizer arguments can be passed through.
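As a quick illustration of the argument in question (a sketch assuming it is forwarded through the tokenizer's `prepare_for_tokenization` kwargs, which is how the slow TransfoXL tokenizer consumes it):

```python
from transformers import TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
# without the flag, punctuation is not separated the way this word-level vocab expects
ids = tokenizer.encode("Hello, my dog is cute.", add_space_before_punct_symbol=True)
print(tokenizer.decode(ids))
```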
07-02-2020 10:18:37
07-02-2020 10:18:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=h1) Report > Merging [#5465](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6726416e4a9780e7a92b5681e1446f15f7ef83d3&el=desc) will **decrease** coverage by `0.12%`. > The diff coverage is `85.71%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5465/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5465 +/- ## ========================================== - Coverage 77.52% 77.40% -0.13% ========================================== Files 141 141 Lines 24610 24617 +7 ========================================== - Hits 19079 19054 -25 - Misses 5531 5563 +32 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.00% <85.71%> (+0.11%)` | :arrow_up: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+33.33%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=footer). Last update [6726416...25f8c86](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,464
closed
Create model card
Create model card for electra-small-discriminator fine-tuned on SQUAD v2.0
07-02-2020 10:13:48
07-02-2020 10:13:48
transformers
5,463
closed
Pre-Trained Model (ipuneetrathore/bert-base-cased-finetuned-finBERT) loads in PyTorch but not Tensorflow
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): TFBertModel Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: This Works: ``` import torch PRE_TRAINED_MODEL_NAME = 'ipuneetrathore/bert-base-cased-finetuned-finBERT' model = BertForSequenceClassification.from_pretrained(PRE_TRAINED_MODEL_NAME) # loads just fine ``` This Does NOT Work: ``` import tensorflow as tf PRE_TRAINED_MODEL_NAME = 'ipuneetrathore/bert-base-cased-finetuned-finBERT' model = TFBertForSequenceClassification.from_pretrained(PRE_TRAINED_MODEL_NAME) # ERROR: OSError: Can't load weights for 'ipuneetrathore/bert-base-cased-finetuned-finBERT'. Make sure that: - 'ipuneetrathore/bert-base-cased-finetuned-finBERT' is a correct model identifier listed on 'https://huggingface.co/models' - or 'ipuneetrathore/bert-base-cased-finetuned-finBERT' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior It should load the model. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.0 - Platform: Ubuntu 18.04 - Python version: 3.7 - PyTorch version (GPU?): 1.3.1 with GPU - Tensorflow version (GPU?): 2.1.0 with GPU - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
07-02-2020 09:37:45
07-02-2020 09:37:45
Hello! That's because the user who uploaded that model didn't upload a TensorFlow version, only a PyTorch version. You can see this when you click on "show all files": there is a `pytorch_model.bin`, but no `tf_model.h5`. Here you can solve this by telling the TF model that you want to load from PyTorch weights:
```py
import tensorflow as tf
from transformers import TFBertForSequenceClassification

PRE_TRAINED_MODEL_NAME = 'ipuneetrathore/bert-base-cased-finetuned-finBERT'
model = TFBertForSequenceClassification.from_pretrained(PRE_TRAINED_MODEL_NAME, from_pt=True) # <-- here
```
<|||||>and you could also ask the author (I believe @ipuneetrathore) if they could upload a TF version of the weights<|||||>hi @julien-c just wondering if there is any difference if the pytorch weights are loaded through the TF model anyway?<|||||>Just that the PyTorch weights will have to be converted on the fly every time you instantiate your TF model
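Along the same lines, a small sketch (added for illustration) of converting once and caching a local TF copy, so the on-the-fly conversion does not happen on every instantiation:

```python
from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained(
    "ipuneetrathore/bert-base-cased-finetuned-finBERT", from_pt=True
)
model.save_pretrained("finbert-tf")  # writes tf_model.h5 + config.json locally
# later reloads skip the conversion: TFBertForSequenceClassification.from_pretrained("finbert-tf")
```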
transformers
5,462
closed
Changed expected_output_ids in TransfoXL generation test
#4826 fixed TransfoXL's `prepare_inputs_for_generation` function. This PR changes the expected outputs in the TransfoXL generation test to match the new correct outputs.
07-02-2020 09:24:42
07-02-2020 09:24:42
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=h1) Report > Merging [#5462](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/35befd9ce31c23a774fd34f57bc44033ce70141d&el=desc) will **decrease** coverage by `0.93%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5462/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5462 +/- ## ========================================== - Coverage 77.57% 76.63% -0.94% ========================================== Files 141 141 Lines 24581 24581 ========================================== - Hits 19068 18838 -230 - Misses 5513 5743 +230 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.68% <0.00%> (+2.76%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=footer). Last update [35befd9...89278a5](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,461
closed
[Reformer] combine reformer model with other tokenizers
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): reformer Language I am using the model on (English, Chinese ...): english The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) it's based on the reformer mlm notebook, with tokenizer from t5 or roberta The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) masked language modeling ## To reproduce Steps to reproduce the behavior: 1. replace the tokenizer from reformer with t5 or roberta 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> <pre><code> File "train_reformer.py", line 163, in <module> trainer.train() File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/a-ware/.local/lib/python3.8/site-packages/apex/amp/_initialize.py", line 196, in new_fwd output = old_fwd(*applier(args, input_caster), File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1853, in forward reformer_outputs = self.reformer( File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1623, in forward encoder_outputs = self.encoder( File "/home/a-ware/.local/lib/python3.8/site-packages/torch/n wandb: Waiting for W&B process to finish, PID 142384 n/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1368, in forward hidden_states = _ReversibleFunction.apply( File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1267, in forward layer_outputs = layer( File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1145, in forward attn_outputs = self.attention( File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 10wandb: Program failed with code 1. Press ctrl-c to abort syncing. 
06, in forward self_attention_outputs = self.self_attention( File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 805, in forward query_vectors = self.query(hidden_states) File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 87, in forward return F.linear(input, self.weight, self.bias) File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1612, in linear output = input.matmul(weight.t()) RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` wandb: Process crashed early, not syncing files </code></pre> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: master - Platform: - Python version: 3.8 - PyTorch version (GPU?): 1.4 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
07-02-2020 09:05:33
07-02-2020 09:05:33
I'm not sure I completely understand your process. You're loading a Reformer model - which one, with which checkpoint? You want to use another tokenizer. Which one, loaded from which checkpoint?<|||||>I am using this notebook: https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb The tokenizer I am using is t5-large from the model hub. Restarting my entire system solved the problem; it seems there was an error with CUDA or the Anaconda environment.
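Even though the root cause here turned out to be environmental, here is a minimal sketch of what swapping in another tokenizer can look like, assuming a T5 tokenizer and keeping the Reformer config in sync with it (the class and checkpoint names below are illustrative choices, not the thread's exact setup):

```python
from transformers import ReformerConfig, ReformerModelWithLMHead, T5Tokenizer

# Assumption: any pretrained tokenizer could be swapped in here
tokenizer = T5Tokenizer.from_pretrained("t5-large")

config = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment")
# Keep the embedding matrix and padding id consistent with the new tokenizer
config.vocab_size = tokenizer.vocab_size
config.pad_token_id = tokenizer.pad_token_id

model = ReformerModelWithLMHead(config)
```

The point of the sketch is only that the model's `vocab_size` must match whatever tokenizer produces the input ids; a mismatch typically surfaces as an opaque CUDA indexing/assert error rather than a clear message.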
transformers
5,460
closed
BERT Huggingface trainer api: ValueError: expected sequence of length 128 at dim 1 (got 314)
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I'm using new trainer api in HuggingFace Transformers to train on a GLUE task (QQP). This error shows up during training. This is the example I'm using https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb Only change I made is the task. In the notebook GLUE task is MNLI which I changed to QQP. While original MNLI task runs without errors, QQP task fails. ValueError: expected sequence of length 128 at dim 1 (got 314) <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> Please check my Stackoverflow question for more details **https://stackoverflow.com/questions/62675482/bert-huggingface-trainer-api-valueerror-expected-sequence-of-length-128-at-dim**:
07-02-2020 05:54:29
07-02-2020 05:54:29
I can reproduce! Thank you for opening an issue, I'm looking into it now.<|||||>Same error (Stack Overflow --> https://stackoverflow.com/questions/67004233/typeerror-zeros-like-argument-input-when-fine-tuning-on-mlm); what was the fix in the end, @LysandreJik?<|||||>@LysandreJik Is there any way to fix this (for what I presume is a model pretrained on the old HuggingFace version)?<|||||>Sorry, just seeing this now - @neel04 are you still facing the issue? @msamogh can you open a new issue and fill in the issue template (with full error, environment, code run)? Thanks<|||||>I don't really remember what I was trying to do :sweat_smile: Sorry I couldn't help you more. I think the problem was some changes in the API while the example notebooks weren't updated at that time - the `max_length` argument (which took an `int`) didn't work, leading to that error. Now it's changed to a `str` which represents the padding strategy - and since it works with my current problem, I personally don't think the issue remains anymore :hugs: I haven't done the training though, so I would surely update you if I encounter it again! **EDIT:** My training works (albeit with a small batch size), so you must have processed your data wrongly @msamogh. Check out the updated example notebooks to get an idea of how to build your datasets with the :hugs: `Datasets` library.
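For anyone hitting the same `ValueError` today: it usually means the examples were not padded/truncated to one common length before being turned into tensors. A minimal preprocessing sketch, assuming the GLUE QQP column names `question1`/`question2` (adapt them to your dataset):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(examples):
    # Pad and truncate every question pair to the same fixed length
    # so the resulting features can be stacked into batches
    return tokenizer(
        examples["question1"],
        examples["question2"],
        padding="max_length",
        truncation=True,
        max_length=128,
    )
```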
transformers
5,459
closed
Error while saving model: TypeError: ('Not JSON Serializable:', DistilBertConfig
# πŸ› Bug ## Information In this problem, I am using the pre-trained **distillbert** model embedding to build a custom model (See the code snippet below). Everything works perfectly fine except saving the model (See error below). I am using the latest version of the transformer, which is 3.0.0. I could not even save the same model when using the last version 2.11 (see this issue: [https://github.com/huggingface/transformers/issues/4444](https://github.com/huggingface/transformers/issues/4444)). I was just wondering if you could help me solve the problem. ## Code ``` config = DistilBertConfig.from_pretrained( 'distilbert-base-uncased') config.output_hidden_states = False distillbert_main = TFDistilBertMainLayer(config = config) input_word_ids = tf.keras.layers.Input(shape=(8,), dtype = tf.int32, name = "input_word_ids"), x = distillbert_main(input_word_ids)[0] x = tf.keras.layers.Lambda(lambda seq: seq[:, 0, :])(x) x = tf.keras.layers.BatchNormalization()(x) x = tf.keras.layers.Dropout(0.2)(x) out = tf.keras.layers.Dense(2)(x) model = tf.keras.Model(inputs=input_word_ids, outputs=out) for layer in model.layers[:3]: layer.trainable = False model.summary() # Works fine model.get_config() # Works fine model.save('./model.h5') # Does not work and produce error ``` ## Error ``` TypeError Traceback (most recent call last) <ipython-input-32-1fbe6dabead0> in <module> ----> 1 model.save('./model.h5') /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options) 1050 """ 1051 save.save_model(self, filepath, overwrite, include_optimizer, save_format, -> 1052 signatures, options) 1053 1054 def save_weights(self, filepath, overwrite=True, save_format=None): /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options) 133 'or using `save_weights`.') 134 hdf5_format.save_model_to_hdf5( --> 135 model, filepath, overwrite, include_optimizer) 136 else: 137 saved_model_save.save(model, filepath, overwrite, include_optimizer, /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py in save_model_to_hdf5(model, filepath, overwrite, include_optimizer) 111 if isinstance(v, (dict, list, tuple)): 112 f.attrs[k] = json.dumps( --> 113 v, default=serialization.get_json_type).encode('utf8') 114 else: 115 f.attrs[k] = v /usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw) 236 check_circular=check_circular, allow_nan=allow_nan, indent=indent, 237 separators=separators, default=default, sort_keys=sort_keys, --> 238 **kw).encode(obj) 239 240 /usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in encode(self, o) 197 # exceptions aren't as detailed. The list call should be roughly 198 # equivalent to the PySequence_Fast that ''.join() would do. 
--> 199 chunks = self.iterencode(o, _one_shot=True) 200 if not isinstance(chunks, (list, tuple)): 201 chunks = list(chunks) /usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in iterencode(self, o, _one_shot) 255 self.key_separator, self.item_separator, self.sort_keys, 256 self.skipkeys, _one_shot) --> 257 return _iterencode(o, 0) 258 259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, /usr/local/lib/python3.7/site-packages/tensorflow/python/util/serialization.py in get_json_type(obj) 74 return obj.__wrapped__ 75 ---> 76 raise TypeError('Not JSON Serializable:', obj) TypeError: ('Not JSON Serializable:', DistilBertConfig { "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "vocab_size": 30522 } ) ``` - `transformers` version: 3.0.0 - Platform: Mac OSX - Python version: 3.7 - PyTorch version (GPU?): No - Tensorflow version: 2.2.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: NO
07-02-2020 05:17:44
07-02-2020 05:17:44
Hi! The way to save the transformers model is using the `save_pretrained` method, which saves both the configuration and the model as an h5 file. Can you try using it instead?<|||||>> Hi! The way to save the transformers model is using the `save_pretrained` method, which saves both the configuration and the model as an h5 file. Can you try using it instead? I am not saving the "transformers model"; instead I use it as the top layer of a Keras model. The error occurs when saving the Keras model that includes the "transformers model."<|||||>Ok, maybe @jplu or @patrickvonplaten can have a look when they have some bandwidth.<|||||>At first glance, I can say that it is "normal" because the `DistilBert` model has a config parameter, which doesn't make it compliant with sequential models. Create a subclassed model instead to see if it works. But this is just a quick guess, I will check it more deeply when I have some time.<|||||>I found that the model could be saved in the TensorFlow SavedModel format using: `tf.saved_model.save(model, './models/model')` However, I was not able to save it in the Keras .h5 format. That's fine for me now, so I'm closing this issue.
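A hedged workaround sketch, in case someone still needs an .h5-style artifact: save the Keras weights and the transformer config separately, then rebuild the same architecture before reloading (the file names are arbitrary, and `model`/`config` refer to the objects from the snippet in the issue body):

```python
# Save only the weights plus the DistilBERT config, side-stepping the JSON
# serialization of the config object that breaks model.save('*.h5')
model.save_weights("./model_weights.h5")
config.to_json_file("./distilbert_config.json")

# Later: rebuild the exact same Keras graph with the same code as above, then
# model.load_weights("./model_weights.h5")
```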
transformers
5,458
closed
πŸ› Can't use `AutoTokenizer` with `sshleifer/mbart-large-cc25`
# πŸ› Bug From [`sshleifer/mbart-large-cc25`](https://huggingface.co/sshleifer/mbart-large-cc25) : ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("sshleifer/mbart-large-cc25") ``` --- Running this code yield an error : >OSError: Model name 'sshleifer/mbart-large-cc25' was not found in tokenizers model name list (facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum). We assumed 'sshleifer/mbart-large-cc25' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. --- Which Tokenizer should I use with this model ? @sshleifer
07-02-2020 05:05:33
07-02-2020 05:05:33
Ha, I didn't know the tokenizers for Bart and mBART are different. I just noticed there is a class `MBartTokenizer`. It seems like this class is not covered in the [HuggingFace documentation](https://huggingface.co/transformers/model_doc/bart.html). Maybe we should consider adding it? _Also the model card for `sshleifer/mbart-large-cc25` may need an update_ --- Also, the following is still not working: ```python from transformers import MBartTokenizer tokenizer = MBartTokenizer.from_pretrained("sshleifer/mbart-large-cc25") ``` Only `tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro')` seems to work. **Can I use the tokenizer from the checkpoint `facebook/mbart-large-en-ro` for the model `sshleifer/mbart-large-cc25`?**<|||||>mbart-large-cc25 does not work well yet, the PR is still open #3513. Nonetheless these are all things I should fix, thanks!<|||||>For your second question, no. At the moment that tokenizer will not work well. Do you have fairseq cc25 working well? **Update:** just moved it to `facebook/mbart-large-cc25`. AutoTokenizer should work. <|||||>I didn't try the fairseq model, went directly for the HF implementation ^^
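Following the update above, a quick sketch of what is expected to work once the checkpoint lives under `facebook/mbart-large-cc25` (treat it as a sanity check rather than a guarantee that the cc25 weights themselves are final):

```python
from transformers import AutoTokenizer

# Should resolve to MBartTokenizer via the checkpoint's config
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
batch = tokenizer(["UN Chief Says There Is No Military Solution in Syria"], return_tensors="pt")
print(batch["input_ids"].shape)
```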
transformers
5,457
closed
[Bart] enable test_torchscript, update test_tie_weights
This sets `test_torchscript=True` for BART and removes unneeded asserts in `test_tie_weights`.
07-02-2020 04:06:51
07-02-2020 04:06:51
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=h1) Report > Merging [#5457](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/306f1a269504b781f886d75105acabf8ae95bd11&el=desc) will **decrease** coverage by `0.28%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5457/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5457 +/- ## ========================================== - Coverage 77.86% 77.57% -0.29% ========================================== Files 141 141 Lines 24608 24608 ========================================== - Hits 19160 19089 -71 - Misses 5448 5519 +71 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `32.43% <0.00%> (-55.86%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=footer). Last update [306f1a2...e08f8b7](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,456
closed
Add description of required special symbols
07-02-2020 03:13:27
07-02-2020 03:13:27
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=h1) Report > Merging [#5456](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/306f1a269504b781f886d75105acabf8ae95bd11&el=desc) will **decrease** coverage by `1.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5456/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5456 +/- ## ========================================== - Coverage 77.86% 76.80% -1.06% ========================================== Files 141 141 Lines 24608 24608 ========================================== - Hits 19160 18901 -259 - Misses 5448 5707 +259 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `70.76% <0.00%> (-13.08%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.15% <0.00%> (-6.29%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.89% <0.00%> (-1.33%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=footer). 
Last update [306f1a2...8ce649f](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,455
closed
How to batch encode sentences using BertTokenizer?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I would like to create a minibatch by encoding multiple sentences using transformers.BertTokenizer. How can I do it? I tried following code. ``` from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') tokenizer.encode('this is the first sentence') >>> [2023, 2003, 1996, 2034, 6251] tokenizer.encode(['this is the first sentence', 'another setence']) >>> [100, 100] # expecting 7 tokens ``` <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62669261/how-to-encode-multiple-setence-using-transformers-berttokenizer
07-02-2020 02:36:01
07-02-2020 02:36:01
Hi @RayLei, have a look at this: https://huggingface.co/transformers/preprocessing.html
transformers
5,454
closed
Error while saving Longformer pre-trained model
Thanks for the transformers library! ## Information I am trying to fine-tune a pre-trained model of type `LongformerForQuestionAnswering` on a custom QA dataset using a custom script morphed from `run_squad.py`. The pre-trained model is `allenai/longformer-large-4096-finetuned-triviaqa`. While saving the pretrained model, I run into the following error: ``` Traceback (most recent call last): File "examples/question-answering/run_nq.py", line 809, in <module> main() File "examples/question-answering/run_nq.py", line 752, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "examples/question-answering/run_nq.py", line 248, in train tokenizer.save_pretrained(output_dir) File "/home/danishp/git/explain-qa/src/third_party/transformers/src/transformers/tokenization_utils_base.py", line 1368, in save_pretrained write_dict[key] = value.__getstate__() AttributeError: 'AddedToken' object has no attribute '__getstate__' ```
07-02-2020 00:30:45
07-02-2020 00:30:45
+1, I got the same error.<|||||>Hi, do you mind pasting your environment information? Especially related to your transformers and tokenizers versions.<|||||>Hi @LysandreJik, thanks for checking in. I am using the version 2.11.0 of the transformers library, and tokenizers==0.7.0. Following is the associated [config file](https://s3.amazonaws.com/models.huggingface.co/bert/allenai/longformer-large-4096-finetuned-triviaqa/config.json). It doesn't say much about the tokenizer version, but I think the tokenizers are too loaded from `LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")` ``` { "architectures": [ "LongformerForQuestionAnswering" ], "attention_mode": "longformer", "attention_probs_dropout_prob": 0.1, "attention_window": [ 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512 ], "bos_token_id": 0, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "ignore_attention_mask": false, "initializer_range": 0.02, "intermediate_size": 4096, "layer_norm_eps": 1e-05, "max_position_embeddings": 4098, "model_type": "longformer", "num_attention_heads": 16, "num_hidden_layers": 24, "pad_token_id": 1, "sep_token_id": 2, "type_vocab_size": 1, "vocab_size": 50265 } ```<|||||>A simple way to reproduce the problem is the following: ```python import transformers from transformers import * tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096") tokenizer.save_pretrained("~/") ```<|||||>I think I found out where the problem lies: ```python tokenizer.special_tokens_map_extended.items() ``` There are these special tokens which are instances of `AddedToken` which do not have a `__getstate__` function which is called in line 1368 of `tokenization_utils_base.py` ` dict_items([('bos_token', AddedToken("<s>", rstrip=False, lstrip=False, single_word=False)), ('eos_token', AddedToken("</s>", rstrip=False, lstrip=False, single_word=False)), ('unk_token', AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False)), ('sep_token', AddedToken("</s>", rstrip=False, lstrip=False, single_word=False)), ('pad_token', AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False)), ('cls_token', AddedToken("<s>", rstrip=False, lstrip=False, single_word=False)), ('mask_token', AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False))]) `<|||||>Hmmm, I can't reproduce on my end with your versions. Three questions: - Did you install from source? If you did, it's possible that you have some tokenizer changes that were intended for version 3.0.0. In that case, could you try installing tokenizers==0.8.0, that has the necessary changes to handle that? - Is it possible for you to reinstall both transformers and tokenizers to check? `pip install -U transformers==2.11.0` and `pip install -U tokenizers==0.8.0` - **If all else fails, is it a possibility for you to install the latest versions? A simple `pip install -U transformers` should take care of it.** Let me know if any of these fix your issue.<|||||>Actually, I never pip-installed the tranformers library, I am just running the cloned github code from a few days ago (this is because I had to edit some parts of the code for my use case). However, when I pip installed these versions, surprisingly, I don't see this error. As you suggest, it is possible that some tokenizer changes that were intended for version 3.0.0 crept in. 
In the cloned code that I am using, if I change the following line to: https://github.com/huggingface/transformers/blob/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d/src/transformers/tokenization_utils_base.py#L1368 ```python write_dict[key] = value.content # instead of __getstate__() ``` The problem is fixed. <|||||>> Actually, I never pip-installed the tranformers library, I am just running the cloned github code from a few days ago (this is because I had to edit some parts of the code for my use case). > > However, when I pip installed these versions, surprisingly, I don't see this error. As you suggest, it is possible that some tokenizer changes that were intended for version 3.0.0 crept in. > > In the cloned code that I am using, if I change the following line to: > > https://github.com/huggingface/transformers/blob/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d/src/transformers/tokenization_utils_base.py#L1368 > > ```python > write_dict[key] = value.content # instead of __getstate__() > ``` > > The problem is fixed. Just had the same issue with version 3.0.2 while fine-tuning the Robert-base model. Guess, it would have been the same with other BERT-base models. Changing this line solved the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>`pip install -U tokenizers==0.8.0` solved this!!!!
transformers
5,453
closed
The output to be used for getting sentence embeddings from BERT
What is the output that we should be using to get embeddings for a sentence using BERT? When I load the pre-trained BERT model ([BertModel](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel)) from huggingface for inference, should I be using the `pooler_output`, the output of the last hidden layer, or something else? While fine-tuning BERT, which huggingface module should be used for getting sentence embeddings? Is it [BertForSequenceClassification](https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification), [BertForMaskedLM](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm), [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), or some other module?
07-02-2020 00:04:40
07-02-2020 00:04:40
Hi @AkshitaJha, what is your downstream task? Also, you may want to try these out for sentence embeddings: https://huggingface.co/deepset/sentence_bert https://github.com/UKPLab/sentence-transformers <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
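To make the sentence-transformers suggestion above concrete, here is a minimal usage sketch (the model name is just one of the pretrained options listed in that repository):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bert-base-nli-mean-tokens")
embeddings = model.encode(["This is the first sentence.", "And a second one."])
print(embeddings.shape)  # (2, 768) for this particular model
```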
transformers
5,452
closed
Text Classification with PyTorch Lightning: 'dict' object has no attribute 'task'
Hi, after manually resolving the `n_gpu` attribute issue in `lightning_base.py` (see #5385), I found another strange behaviour in the Text Classification example. I used PL in version *0.8.1* with the `run_pl.sh` script. Training works, but after reloading the model for evaluation, the following error message is thrown: ```bash Traceback (most recent call last): File "run_pl_glue.py", line 189, in <module> model = model.load_from_checkpoint(checkpoints[-1]) File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/saving.py", line 171, in load_from_checkpoint model = cls._load_model_state(checkpoint, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/saving.py", line 201, in _load_model_state model = cls(*args, **kwargs) File "run_pl_glue.py", line 28, in __init__ hparams.glue_output_mode = glue_output_modes[hparams.task] AttributeError: 'dict' object has no attribute 'task' ``` I did some debugging. So the interesting part is in the constructor: https://github.com/huggingface/transformers/blob/306f1a269504b781f886d75105acabf8ae95bd11/examples/text-classification/run_pl_glue.py#L26-L30 For training (first initialization), the `hparams` variable outputs: ```python Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='./glue_data/MRPC/', do_predict=True, do_train=True, eval_batch_size=32, fast_dev_run=False, fp16=True, fp16_opt_level='O1', gpus=1, gradient_accumulation_steps=1, learning_rate=2e-05, max_grad_norm=1.0, max_seq_length=128, model_name_or_path='bert-base-cased', n_tpu_cores=0, num_train_epochs=1, num_workers=4, output_dir='/mnt/transformers-pl/examples/text-classification/mrpc-pl-bert', overwrite_cache=False, resume_from_checkpoint=None, seed=2, task='mrpc', tokenizer_name=None, train_batch_size=32, val_check_interval=1.0, warmup_steps=0, weight_decay=0.0) ``` Notice the type: it is a `Namespace`. After training... and re-loading the model checkpoint, `hparams` looks like: ```python {'output_dir': '/mnt/transformers-pl/examples/text-classification/mrpc-pl-bert', 'fp16': True, 'fp16_opt_level': 'O1', 'fast_dev_run': False, 'gpus': 1, 'n_tpu_cores': 0, 'max_grad_norm': 1.0, 'do_train': True, 'do_predict': True, 'gradient_accumulation_steps': 1, 'seed': 2, 'resume_from_checkpoint': None, 'val_check_interval': 1.0, 'model_name_or_path': 'bert-base-cased', 'config_name': '', 'tokenizer_name': None, 'cache_dir': '', 'learning_rate': 2e-05, 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'warmup_steps': 0, 'num_workers': 4, 'num_train_epochs': 1, 'train_batch_size': 32, 'eval_batch_size': 32, 'max_seq_length': 128, 'task': 'mrpc', 'data_dir': './glue_data/MRPC/', 'overwrite_cache': False, 'glue_output_mode': 'classification'} ``` It's strange, because it is now a normal dictionary so `hparams.task` is not working 😒 @sshleifer could you help with that issue πŸ€”
07-02-2020 00:04:01
07-02-2020 00:04:01
You could manually cast it to a namespace with ```python argparse.Namespace(**ckpt["hparams"]) ``` But @williamFalcon may have a cleaner solution <|||||>I added it with a very *very* dirty fix, in GLUETransformer init added this to avoid cast it to Namespace if it was a dict `if type(hparams) is dict: hparams = Namespace(**hparams) `<|||||>The official way to do this is to call `self.save_hyperparameters(hparams)` in the constructor of the module - then the hyperparameters will be accessible through `self.hparams['some_param']` and `self.hparams.some_param` as well.<|||||>@nagyrajmund Hey, but that looks like it does not solve the issue. Even without save_hyperparameters() call, it will save the hparams in the checkpoint and the yaml file.<|||||>Hey-hey, I think you misunderstood me, my proposed fix is to replace [this line](https://github.com/huggingface/transformers/blob/33d7506ea10ca92886fd1bb3b5306a1a720c58fe/examples/lightning_base.py#L59) with `self.save_hyperparameters(hparams)`. Then the hparams will be loaded correctly from the checkpoint without changing any other functionality in the module. Let me know if you run into any issues :)<|||||>@nateraw @borda <|||||>the conclusion after sharing min exmple is missing `self.save_hyperparameters()` in init https://pytorch-lightning.slack.com/archives/CRBLFHY79/p1595502354412700<|||||>*EDIT: Does not work as intended, please check the other comments* > @nagyrajmund Hey, but that looks like it does not solve the issue. Even without save_hyperparameters() call, it will save the hparams in the checkpoint and the yaml file. It does work, i think as @Borda mentioned the example is missing that. Among, `gpus` parameter and `load_datasets()` functions were the issues. <|||||>@bhashithe mind share the code or is it this example? transformers/examples/text-classification/run_pl_glue.py<|||||>*EDIT: Does not work as intended, please check the other comments* @Borda It is actually the example, but i had to alter both lightning_base.py and run_pl_glue.py to get it to work.<|||||>would you mind sending a PR with your fix @bhashithe ?<|||||>No problem, let me send that now.<|||||>Sorry @Borda that save_hyperparameters() fix does not work @nagyrajmund Small oversight on my part, anyway i have it working by resetting hparams to be a Namespace().<|||||>Created #6027 with fixes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
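For reference, a minimal sketch of the constructor-side fix discussed in this thread, assuming PyTorch Lightning 0.8.x (where assigning to `self.hparams` is still allowed) and the `GLUETransformer` class name from the example script:

```python
import argparse
import pytorch_lightning as pl


class GLUETransformer(pl.LightningModule):
    def __init__(self, hparams):
        super().__init__()
        # load_from_checkpoint hands hparams back as a plain dict,
        # so normalize it before any attribute-style access
        if isinstance(hparams, dict):
            hparams = argparse.Namespace(**hparams)
        self.hparams = hparams
```

On newer Lightning versions, calling `self.save_hyperparameters(hparams)` instead, as suggested above, is the cleaner route.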
transformers
5,451
closed
TF: inputs vs input_ids
Why should TF encoder-decoder models take `inputs` instead of `input_ids`? https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py#L328 @patrickvonplaten
07-01-2020 23:17:26
07-01-2020 23:17:26
Think it's required for some weird keras inner workings, or @LysandreJik ? I remember I had to change them to `inputs` in TF T5 at some point as well.<|||||>### Background for Keras inner workings: (taken from the docs) TF 2.0 models accepts two formats as inputs: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional arguments. If you choose the second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument : - a single Tensor with input_ids only and nothing else: `model(inputs_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({'input_ids': input_ids, 'token_type_ids': token_type_ids})` The first argument name is therefore more appropriate as `inputs` rather than `input_ids`, since it can contain all the inputs. This is why you can see such a snippet at the beginning of each transformer layer, in order to gather all inputs: ```py if isinstance(inputs, (tuple, list)): input_ids = inputs[0] past = inputs[1] if len(inputs) > 1 else past attention_mask = inputs[2] if len(inputs) > 2 else attention_mask token_type_ids = inputs[3] if len(inputs) > 3 else token_type_ids [...] assert len(inputs) <= 10, "Too many inputs." elif isinstance(inputs, (dict, BatchEncoding)): input_ids = inputs.get("input_ids") past = inputs.get("past", past) attention_mask = inputs.get("attention_mask", attention_mask) token_type_ids = inputs.get("token_type_ids", token_type_ids) [...] assert len(inputs) <= 10, "Too many inputs." else: input_ids = inputs ``` ### Actual reason why things are done this way in the tests: It stems from that PR: https://github.com/huggingface/transformers/pull/3547. Previously it was written in the T5 forward pass as `decoder_input_ids`, while it could be a dict and, therefore, contain everything. Looking at it now, I guess it could be put as `input_ids` too (since it's a positional argument, the naming doesn't really matter). <|||||>T5 is supporting ```python def call(inputs, **kwargs): if isinstance(inputs, dict): kwargs.update(inputs) else: kwargs["inputs"] = inputs # retrieve arguments inputs = kwargs.get("inputs", None) ... ``` I will try to kwarg everything, because to me this is an explosion of input types and boilerplate.<|||||>I see what you mean now @sshleifer! Yes you are right in T5 the name was wrong IMO. Fixing this now in a bigger TF refactor PR.<|||||>So as you said this line: https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py#L328 should be changed to just: ``` input_ids = inputs_keywords.pop("input_ids", None) ``` <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,450
closed
Add Reformer MLM notebook
Adds a simple notebook on how to do masked language modeling (MLM) with Reformer.
07-01-2020 22:19:22
07-01-2020 22:19:22
transformers
5,449
closed
Guide to fixed-length model perplexity evaluation
This post / guide is inspired by this recent [Twitter discussion](https://twitter.com/myleott/status/1245840363262283776) and [this gist](https://gist.github.com/myleott/cdf685b8b3ce20b0221e1842782bce74) on the different ways that perplexity can be evaluated and the optimal strategy of a strided "sliding window". Interested in feedback both on the guide/writing component as well as the theoretical discussion on PPL. Right now my understanding is that our language modeling script uses non-overlapping segments rather than the sliding window. Relevant to #4415, #4219.
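For readers who want the gist before opening the guide, here is a rough sketch of the strided sliding-window evaluation it describes, with GPT-2 as the example model (the per-window loss scaling is approximate, since the loss is a mean over the scored tokens):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

encodings = tokenizer("some long evaluation text ...", return_tensors="pt")
max_length, stride = model.config.n_positions, 512

nlls = []
for i in range(0, encodings.input_ids.size(1), stride):
    begin_loc = max(i + stride - max_length, 0)
    end_loc = min(i + stride, encodings.input_ids.size(1))
    trg_len = end_loc - i  # only score the new tokens of this window
    input_ids = encodings.input_ids[:, begin_loc:end_loc]
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # ignore the overlapping context

    with torch.no_grad():
        loss = model(input_ids, labels=target_ids)[0]
    nlls.append(loss * trg_len)

ppl = torch.exp(torch.stack(nlls).sum() / end_loc)
print(ppl)
```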
07-01-2020 22:05:44
07-01-2020 22:05:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=h1) Report > Merging [#5449](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d16e36c7e525aab4c08a6e60a7478e209498dc14&el=desc) will **increase** coverage by `0.86%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5449/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5449 +/- ## ========================================== + Coverage 77.82% 78.68% +0.86% ========================================== Files 141 141 Lines 24608 24608 ========================================== + Hits 19150 19364 +214 + Misses 5458 5244 -214 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.68% <0.00%> (+0.50%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=footer). Last update [d16e36c...b3dae20](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,448
closed
grammar corrections and train data update
- fixed grammar and spelling - added an intro - updated Training data references
07-01-2020 21:07:45
07-01-2020 21:07:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=h1) Report > Merging [#5448](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d16e36c7e525aab4c08a6e60a7478e209498dc14&el=desc) will **decrease** coverage by `0.86%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5448/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5448 +/- ## ========================================== - Coverage 77.82% 76.95% -0.87% ========================================== Files 141 141 Lines 24608 24608 ========================================== - Hits 19150 18938 -212 - Misses 5458 5670 +212 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=footer). Last update [d16e36c...20f340c](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,447
closed
Where did "prepare_for_model" go? What is the replacement?
I'm working with already numericalized data (e.g., where the text has been converted to ids via `tokenizer.tokenize()`) and was using `prepare_for_model` to build the appropriate input dictionary ... ***but*** that method is gone in 3.0. So ... what should I use/do now? Thanks
07-01-2020 19:20:34
07-01-2020 19:20:34
Hi! Why were you using `tokenize` + `prepare_for_model` instead of `encode`/`encode_plus` ? Let's see how to fit your use-case with the best approach!<|||||>Sure. I'm the developer of [this library](https://ohmeow.github.io/blurr/) which integrates huggingface with fastai. Probably the best thing is to look at the code [here](https://github.com/ohmeow/blurr/blob/master/blurr/data/core.py) to see what I'm doing. The fastai bits don't work all that well with inputs composed of multiple tensors, and so my initial fastai transform converts the text to ids, which are then wrapped in a tensor to make fastai happy. Before batches are created, I would use `prepare_for_model` to get the necessary transformer inputs (e.g. input_ids, attention_mask, etc...), pad to max length, etc..., using those ids. @sgugger may have some thoughts on how to adapt my code better to v3 given he wrote most of those pieces in fastai :) Thanks!<|||||>@LysandreJik Would it be possible that you link this issue in the "breaking changes" section for the 3.0.0 release πŸ€” In Flair we had the same issue :)<|||||>Hmm, I see, let me investigate. Does using the private `_prepare_for_model` solve your issue? I'll ping @n1t0 as well as he might have more info on that front. @stefan-it, just did! Thank you.<|||||>Ok, indeed I'll add it as a breaking change also we could expose it publicly again in a 3.0.1 if it happens that many people were using it. The main reason I made it private is that we don't have it in fast tokenizers (though we could maybe work on having it) and I'm trying to have both APIs come closer to each others. @n1t0 do you think we could provide an implementation of this method in Fast tokenizers? It's basically all the post-processing (truncation + merging pairs + padding) after the conversion in integer indices.<|||||>We should be able to expose the post-processing for both `List[str]` and `List[int]` in `tokenizers`, but I'll have to check. I think the only problem in doing so is that all the mappings (chars <=> tokens <=> words) and offsets won't make any sense in this case.<|||||>Ok, we will release a patch to fix this breaking change (re-expose `prepare_for_models` for both slow and fast tokenizers with backward-compatible API) plus the one mentioned in #5377 probably tomorrow or early next week.
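For later readers on a release where the method is public again, a small sketch of going from pre-tokenized ids to model-ready inputs (on 3.0.0 itself, the private `_prepare_for_model` mentioned above is the stopgap, and the exact keyword names may differ slightly by version):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

tokens = tokenizer.tokenize("text that was numericalized earlier")
ids = tokenizer.convert_tokens_to_ids(tokens)

# Builds input_ids / token_type_ids / attention_mask, adds special tokens,
# and pads/truncates to the requested length
enc = tokenizer.prepare_for_model(ids, max_length=32, padding="max_length", truncation=True)
print(enc.keys())
```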
transformers
5,446
closed
Reformer language modeling using run_language_modeling.py: sentences didn't pad to max_length
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Reformer Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) - using run_language_modeling.py The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) - language modeling using wikitext2 ## To reproduce Steps to reproduce the behavior: 1. Using wikitext2 2. run ` python run_language_modeling.py \ --output_dir=output \ --model_type=reformer \ --config_name=google/reformer-crime-and-punishment \ --tokenizer_name=google/reformer-crime-and-punishment \ --line_by_line \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE ` 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Should start training, but got ValueError: ValueError: If training, sequence Length 444 has to be a multiple of least common multiple chunk_length 64. Please consider padding the input to a length of 448. Seems tokenizer didn't pad to max_length in LineByLineTextDataset (https://github.com/huggingface/transformers/blob/f4323dbf8c29952b1ae55b979120969a9aeb730e/src/transformers/data/datasets/language_modeling.py#L78)? <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: <fill in> Y - Using distributed or parallel set-up in script?: <fill in> N
07-01-2020 18:49:32
07-01-2020 18:49:32
Can you maybe just use the script provided here: https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb? In Reformer you have to be careful with the length you use for training. The docs can be helpful here as well: https://huggingface.co/transformers/model_doc/reformer.html<|||||>Thank you, this notebook is helpful. That script uses Crime and Punishment as one document, padded to 2**19 tokens. So basically, if I want to train a Reformer language model on a line-by-line text dataset (e.g. wikitext2), I'll need to write code to manually pad the sequences instead of using run_language_modeling.py? <|||||>You have to make sure that your Reformer config is correctly set up (especially the axial position encodings) according to the docs and the length of your data.
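To make that last point concrete, a rough sketch of what "correctly set up" can look like for line-by-line training, assuming a target length of 448 (the value from the error above); the factorization of `axial_pos_shape` is a free choice as long as its product equals the padded length, and the placeholder data is obviously not real:

```python
from transformers import ReformerConfig, ReformerTokenizer

seq_len = 448  # must be a multiple of the least common multiple of the chunk lengths

config = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment")
config.axial_pos_shape = (16, 28)        # 16 * 28 == 448, required for axial position encodings
config.max_position_embeddings = seq_len

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
if tokenizer.pad_token is None:
    # the pretrained tokenizer may not define a pad token
    tokenizer.pad_token = tokenizer.eos_token

lines = ["a line of training text", "another line"]  # placeholder data
batch = tokenizer(lines, padding="max_length", truncation=True, max_length=seq_len)
```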
transformers
5,445
closed
"Write With Transformer" inserts a space whenever accepting a suggestion, even if a space doesn't belong there
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): GPT-2 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [X] the official example scripts: Write With Transformer * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: Write With Transformer * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Type something ending in the middle of a word. (e.g. `See how a modern neural netw`) Alternatively, type something ending with an open parenthesis or quotation mark, such as `Donald Trump tweeted "`. 2. Press Tab and accept a suggestion. 3. Observe how a space is added prior to the accepted text, despite a space not belonging there. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I'm not sure about the other models, but I know GPT-2 does start continuations with a space when appropriate. So it should be possible to have it only add a space when a space is desirable. Otherwise, I think it would be better to have it not add a space at all, as it's easier to manually add a space before pushing Tab (if one is desired) than it is to go back and delete undesired spaces every time they're generated. ## Environment info N/A
07-01-2020 18:19:54
07-01-2020 18:19:54
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue still occurs. How do I reopen?
transformers
5,444
closed
Inconsistent tokenizer handling of max_len
Hi, it seems that at least the RobertaTokenizerFast is not actually truncating encodings to the max_len when encoding(the same issue occurs with the other encoding functions). The BPE tokenizer from tokenizers does. Below the problem is shown based on the 'how to train from scratch' example. ``` from tokenizers.implementations import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing from transformers import RobertaTokenizerFast good_tokenizer = ByteLevelBPETokenizer( "./BERT-Esperanto/vocab.json", "./BERT-Esperanto/merges.txt", ) good_tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", good_tokenizer.token_to_id("</s>")), ("<s>", good_tokenizer.token_to_id("<s>")), ) good_tokenizer.enable_truncation(max_length=512) bad_tokenizer = RobertaTokenizerFast.from_pretrained("./BERT-Esperanto/", max_len=512) txt = "Mi estas Julien." * 1000 print( len(good_tokenizer.encode(txt).tokens), len(bad_tokenizer.encode(txt)) ) # results: 512 5002 ```
07-01-2020 17:52:01
07-01-2020 17:52:01
Yes, similarly to the `good_tokenizer` where you enabled truncation, you should enable it for the `bad_tokenizer`: ```py from transformers import RobertaTokenizerFast actually_very_good_tokenizer = RobertaTokenizerFast.from_pretrained("./BERT-Esperanto/", max_len=512) txt = "Mi estas Julien." * 1000 print( len(actually_very_good_tokenizer.encode(txt, truncation=True)) ) # 512 ``` You can check the documentation [here](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__).<|||||>Ah, gotcha my mistake, thanks for the quick response<|||||>Love the variable names here. (and the sample text) 🀣
transformers
5,443
closed
(TF) model.generate to tf.function for tf serving
# ❓ Questions & Help How can we wrap the model.generate and export it as a part of savedModel pb file? In this way, we can use beam search or topK during the tf serving or converting it to coremltools model. ## Details I am trying to find a way to wrap the model in a Keras Model. But apparently model.generate is not tf.function supported. Like for loop is not supported in tf.function. ``` from transformers import * class WrapModel(tf.keras.models.Model): def __init__(self, transformer): super(WrapModel, self).__init__() self.transformer = transformer @tf.function def _internal_generate(self, inputs): return self.transformer.generate(inputs, max_length=10, length_penalty=1.0, repetition_penalty=2.5, early_stopping=True, num_beams=3) def call(self, inputs, **kwargs): print(inputs.shape) res = self._internal_generate(inputs) return res gpt2_model = TFGPT2LMHeadModel.from_pretrained('distilgpt2') w = WrapModel(gpt2_model) input_layer = tf.keras.layers.Input(shape=10, dtype=tf.int32, name='input_ids') prediction_model = w(input_layer) tf_model = tf.keras.models.Model(inputs=input_layer, outputs=prediction_model) import coremltools as ct mlmodel = ct.convert(tf_model) ```
07-01-2020 15:59:18
07-01-2020 15:59:18
Hey @gyin-ai, Thanks a lot for the issue! Currently `generate` does not seem to be compatible with `tf.function`. I will open an issue about this and hopefully fix generate so that it will become possible to generate using `tf.function`. You're use case should definitely be possible in the future! I assume a lot of operations will have to changed in the tf generate function though, so this PR might take a while.<|||||>Will try to start on this next week: #5662<|||||>@patrickvonplaten perfect! Look forwarding to this feature so that we could use the LM model with various decoding solutions directly in the TF Serving or on-device. <|||||>Yeah, it not going to that easy :D I will be on holiday for two weeks now, but we will be starting to put the focus much more on TF soon! Also pinging @jplu here, just for notification.<|||||>Hey @gyin-ai I have done some work here https://github.com/huggingface/transformers/pull/5468 for making the models, saved model compliants. Can you try it to see if it might solve your issue? @patrickvonplaten has also started to do some great work on the LM part of TF, so yes @gyin-ai you can expect to have better TF compliancy soon ;)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> Will try to start on this next week: #5662 @patrickvonplaten Has the problem been solved<|||||>@patrickvonplaten Any update on this? Seems `model.generate` is still not compatible with `tf.funtion`.<|||||>cc @Rocketknight1 <|||||>Hi @yuwon, I'm (one of!) the current TF maintainers. We've experimented with wrapping all of `generate()` in a tf.function, but we generally find that buffers are not freed properly after each token is generated and OOM errors usually result after a few steps. `generate()` is important and so we're planning a complete investigation of this to see if there's any way we could make it work, but it's a sizeable project with a lot of other competing priorities and we don't have a concrete ETA right now.<|||||>@Rocketknight1 can you share you you've done on wrapping generate inside a tf function? It might be a start point for us to submit a PR and try to solve it.
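Until `generate()` itself can live inside a `tf.function`, one hedged workaround is to export only the forward pass and run the decoding loop (greedy, beam search, top-k) client-side. A sketch, where the signature name and paths are arbitrary and, depending on the transformers version, the SavedModel fixes referenced above may also be needed:

```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")

@tf.function(input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")])
def serving(input_ids):
    # Only the forward pass is traced; the decoding loop stays outside the graph
    logits = model(input_ids)[0]
    return {"logits": logits}

tf.saved_model.save(model, "saved_model/distilgpt2", signatures={"serving_default": serving})
```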
transformers
5,442
closed
[fix] Marian tests import
07-01-2020 15:29:00
07-01-2020 15:29:00
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=h1) Report > Merging [#5442](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/13deb95a405bbd1037ad233c692d7fd1de9d31e3&el=desc) will **increase** coverage by `1.60%`. > The diff coverage is `66.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5442/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5442 +/- ## ========================================== + Coverage 76.22% 77.82% +1.60% ========================================== Files 141 141 Lines 24420 24421 +1 ========================================== + Hits 18614 19006 +392 + Misses 5806 5415 -391 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <66.66%> (+0.05%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+0.82%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `89.11% <0.00%> (+1.02%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (+5.02%)` | :arrow_up: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.18% <0.00%> (+9.85%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.14% <0.00%> (+29.44%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=footer). Last update [43cb03a...3f31917](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,441
closed
Benchmarking on TPU shows clearly wrong results
# πŸ› Bug ## Information I'm trying to benchmark performance of TPUs and the results don't make sense: they are the same for all batch sizes. It was mentioned [in the pull request that added the feature](https://github.com/huggingface/transformers/pull/4850#issuecomment-640751636) but the PR was merged anyway. ## To reproduce ``` from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments args = PyTorchBenchmarkArguments( models=["bert-large-uncased"], batch_sizes=[i * 1024 for i in range(2, 17)], sequence_lengths=[16], training=True, no_memory=True ) benchmark = PyTorchBenchmark(args) results = benchmark.run() ``` Output: ``` ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-large-uncased 2048 16 0.027 bert-large-uncased 3072 16 0.029 bert-large-uncased 4096 16 0.028 bert-large-uncased 5120 16 0.027 bert-large-uncased 6144 16 0.027 bert-large-uncased 7168 16 0.028 bert-large-uncased 8192 16 0.028 bert-large-uncased 9216 16 0.027 bert-large-uncased 10240 16 0.027 bert-large-uncased 11264 16 0.027 bert-large-uncased 12288 16 0.027 bert-large-uncased 13312 16 0.027 bert-large-uncased 14336 16 0.027 bert-large-uncased 15360 16 0.028 bert-large-uncased 16384 16 0.028 -------------------------------------------------------------------------------- TPU was used for inference. Note that the time after compilation stabilized (after ~10 inferences model.forward(..) calls) was measured. ==================== TRAIN - SPEED - RESULTS ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-large-uncased 2048 16 0.089 bert-large-uncased 3072 16 0.074 bert-large-uncased 4096 16 0.091 bert-large-uncased 5120 16 0.091 bert-large-uncased 6144 16 0.091 bert-large-uncased 7168 16 0.075 bert-large-uncased 8192 16 0.089 bert-large-uncased 9216 16 0.09 bert-large-uncased 10240 16 0.074 bert-large-uncased 11264 16 0.09 bert-large-uncased 12288 16 0.09 bert-large-uncased 13312 16 0.09 bert-large-uncased 14336 16 0.077 bert-large-uncased 15360 16 0.089 bert-large-uncased 16384 16 0.091 -------------------------------------------------------------------------------- TPU was used for training. Note that the time after compilation stabilized (after ~10 train loss=model.forward(...) + loss.backward() calls) was measured. ``` ## Environment info Running on GKE cluster, TPUv3-8, vanilla tpu-pytorch/xla:r1.5 image, XRT_TPU_CONFIG set ``` ==================== ENVIRONMENT INFORMATION ==================== The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false) - transformers_version: 3.0.0 - framework: PyTorch - use_torchscript: False - framework_version: 1.5.0a0+6d48871 - python_version: 3.6.10 - system: Linux - cpu: - architecture: 64bit - date: 2020-07-01 - time: 14:37:15.608184 - fp16: False - use_multiprocessing: False - only_pretrain_model: False - cpu_ram_mb: 30156 - use_gpu: False - use_tpu: True ```
07-01-2020 15:16:17
07-01-2020 15:16:17
That looks like some solid batch parallelization :D Yeah these results don't look very accurate. To be honest TPU Benchmarking is not very well tested yet and probably not very reliable, also partly because PyTorch/XLA is not very robust yet either. I will try to see if I can find the reason for this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
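A likely explanation for the flat numbers is PyTorch/XLA's lazy execution: if the timer stops before the queued XLA graph is actually executed on the TPU, only the (cheap) graph-building step gets measured, regardless of the batch size. A rough manual sanity check could look like the sketch below; it assumes a working `torch_xla` install and forces materialization by pulling one output value back to the host before stopping the timer.

```python
import time

import torch
import torch_xla.core.xla_model as xm
from transformers import BertModel

device = xm.xla_device()
model = BertModel.from_pretrained("bert-large-uncased").to(device)
input_ids = torch.randint(0, 30000, (8, 16)).to(device)

# Warm-up / compilation
for _ in range(10):
    out = model(input_ids)
    xm.mark_step()

start = time.time()
out = model(input_ids)
xm.mark_step()                 # flush the lazy graph to the TPU
_ = out[0][0, 0, 0].item()     # .item() blocks until the result actually exists on the host
print(f"{time.time() - start:.3f}s")
```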
transformers
5,440
closed
Fix dropdown bug in searches
The version in the dropdown was getting weird values during searches; this PR fixes it.
07-01-2020 15:01:43
07-01-2020 15:01:43
transformers
5,439
closed
Don't discard entity_group when token is the last in the sequence.
Signed-off-by: Morgan Funtowicz <[email protected]>
07-01-2020 14:59:38
07-01-2020 14:59:38
LGTM! Thanks @mfuntowicz ! Before: ```bash In [6]: nlp("My name is Wolfgang and I live in Berlin") Out[6]: [{'entity_group': 'I-PER', 'score': 0.9991481900215149, 'word': 'Wolfgang'}] ``` With this PR: ```bash In [5]: nlp("My name is Wolfgang and I live in Berlin") Out[5]: [{'entity_group': 'I-PER', 'score': 0.9991481900215149, 'word': 'Wolfgang'}, {'entity_group': 'I-LOC', 'score': 0.9983668327331543, 'word': 'Berlin'}] ```<|||||>@LysandreJik CI error seems unrelated, is it ok for you if I merge?<|||||>Did you check this, @enzoampil? Just making sure to ping you as you contributed #3957 πŸ€—<|||||>@julien-c Did a few checks as well and looks great! Was planning to include this in this PR #4987 (2nd point), but this seems to solve it cleanly already, so will consider this fix for that PR :smile: UPDATE: Ended up modifying this fix in the PR above, due to cases where the last token was repeating (for the test cases set in the above PR).
transformers
5,438
closed
Change model outputs types to self-document outputs
This PR addresses #5226 with no breaking changes. Instead of returning tuples, all PyTorch models now return an appropriate subclass of `ModelOutput`. Here is an example on a base model: ``` from transformers import BertTokenizer, BertForSequenceClassification import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForSequenceClassification.from_pretrained('bert-base-uncased') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") labels = torch.tensor([1]).unsqueeze(0) # Batch size 1 outputs = model(**inputs, labels=labels) ``` Then `outputs` will be a `SequenceClassifierOutput` object, which has the returned elements as attributes. The previous syntax ``` loss, logits = outputs[:2] ``` will still work, but you can also do ``` loss = outputs.loss logits = outputs.logits ``` or ``` loss = outputs["loss"] logits = outputs["logits"] ``` Under the hood, `outputs` is a dataclass with optional fields that may be set to `None` (like `attentions` in our example). If you index by integer or by slice, the None fields are skipped (for backward compatibility). If you try to access an attribute that's set to None by its key (for instance here `outputs["attentions"]`), it will raise an error. You can convert `outputs` to a regular tuple/dict with `outputs.to_tuple()` or `outputs.to_dict()`. You can revert to the old behavior of getting tuples by setting `return_tuple=True` in the config you pass to your model, when you instantiate your model, or when you call your model on some inputs. If you're using `torchscript` (and the config you passed to your model has `config.torchscript = True`) this will automatically be the case (because jit only handles tuples as outputs). A few other comments about the PR: - The return part of the documentation of each model is now generated from the model output. This is done via the `@add_code_sample_docstrings` decorator or, when the example is inside the docstring, via the `@replace_return_docstrings` decorator. In the second case, we need to know where to put the return documentation, so there is an empty "Return:" that is used as a placeholder. - Two models were not tested (and had a bug): `XLMForTokenClassification` and `XLNetForQuestionAnsweringSimple`. This PR fixes that. - The docstrings of seq2seq generative models like Bart or T5 were wrong as far as the return was concerned. This PR naturally fixes that. - The argument `output_hidden_states` was omitted from all models' forward methods; this PR adds it.
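For intuition, here is a stripped-down sketch of what such an output class can look like under the hood (illustrative only, not the actual library code; the real `ModelOutput` also supports dict-style access and `to_dict()`):

```python
from dataclasses import dataclass, fields
from typing import Optional

import torch


@dataclass
class SequenceClassifierOutputSketch:
    loss: Optional[torch.Tensor] = None
    logits: Optional[torch.Tensor] = None
    hidden_states: Optional[tuple] = None
    attentions: Optional[tuple] = None

    def to_tuple(self):
        # Only the fields that are not None, in declaration order
        return tuple(getattr(self, f.name) for f in fields(self) if getattr(self, f.name) is not None)

    def __getitem__(self, index):
        # Integer/slice indexing skips the None fields, which is what keeps
        # the old tuple-unpacking syntax working
        return self.to_tuple()[index]


outputs = SequenceClassifierOutputSketch(loss=torch.tensor(0.3), logits=torch.randn(1, 2))
loss, logits = outputs[:2]           # old tuple-style access still works
assert outputs.loss is outputs[0]    # attribute access and indexing agree
```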
07-01-2020 14:04:09
07-01-2020 14:04:09
The old occurrences of `isinstance(item, tuple)` can be replaced by `isinstance(item, tuple) or is_dataclass(item)` (to catch the return_tuple behavior) (`is_dataclass` comes from the dataclasses module).<|||||>General question (haven't dived deeply into this PR): do we really want to maintain backward compatibility on this at "all cost"? Or should we migrate to a cleaner "real" NamedTuple or Dataclass output w/ a major version change?<|||||>> General question (haven't dived deeply into this PR): > > do we really want to maintain backward compatibility on this at "all cost"? Or should we migrate to a cleaner "real" NamedTuple or Dataclass output w/ a major version change? FWIW, one of the more frequent complaints I saw in the survey we just sent out is that we introduce breaking changes too often.<|||||>I think we want to maintain backwards compatibility with this at all cost, since not doing this would introduce a huge breaking change that will affect all users. And backwards compatibility doesn't seem too hard to keep, with @sgugger's approach.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=h1) Report > Merging [#5438](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b2747af5434e5a5d8ab1d7e2789699d20d7a4ab8&el=desc) will **decrease** coverage by `0.12%`. > The diff coverage is `95.51%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5438/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5438 +/- ## ========================================== - Coverage 77.94% 77.81% -0.13% ========================================== Files 145 146 +1 Lines 25368 25939 +571 ========================================== + Hits 19773 20185 +412 - Misses 5595 5754 +159 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <0.00%> (-0.35%)` | :arrow_down: | | [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `24.10% <45.45%> (+1.99%)` | :arrow_up: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.71% <75.51%> (-1.95%)` | :arrow_down: | | 
[src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `87.70% <76.66%> (-0.61%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.90% <82.35%> (-1.25%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.65% <83.33%> (-0.80%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.16% <91.42%> (+0.14%)` | :arrow_up: | | ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=footer). Last update [b2747af...6b5f49b](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This is great! Very clean<|||||>Added a few TFBert models so tagging @jplu. There seems to be issues coming with the compilation of models and I had to add a hacky shape property to `ModelOutput` to make some tests pass (one still mysteriously fails for electra). In general is changing the output type a bad idea for TF models or is it worth pursuing this?<|||||>This is a great work!!! Unfortunately changing the output type is a bad idea for TF models as you said :( in TF each output must be a tensor or a dict of tensors. Mostly for saved models, as simple example you can run this small script: ``` import tensorflow as tf from transformers import TFBertModel, BertTokenizer, BertConfig model = TFBertModel.from_pretrained('bert-base-multilingual-uncased') tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased") features = tokenizer.encode_plus("Hello world.", add_special_tokens=True, return_tensors="tf") model._saved_model_inputs_spec = None model._set_save_spec(dict(features)) tf.saved_model.save(model, "save/test") ``` You will get: ``` TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function trace_model_call.<locals>._wrapped_model at 0x7efd20582b90>, found return value of type <class 'transformers.modeling_tf_outputs.TFEncoderOutputWithPooling'>, which is not a Tensor. ``` I propose that you remove the TF parts and we will take the time later to check that together? Sorry :(<|||||>To fix the failing test pass, you can add the following code: ```python if "__name__" not in frame.f_globals: return traceit ``` before this line: https://github.com/huggingface/transformers/blob/fa5423b1695cd24856bcff47214172e0f540d924/src/transformers/benchmark/benchmark_utils.py#L389 I checked and the functionality is not broken because of it. 
It just means that for lines in which the code cannot find nested modules to trace, it jumps out of the recursion directly, similar to what is done here: https://github.com/huggingface/transformers/blob/fa5423b1695cd24856bcff47214172e0f540d924/src/transformers/benchmark/benchmark_utils.py#L391 So IMO, this is actually how the code should be written in benchmark tracing and not a dirty fix. Also cc @thomwolf here since he originally added the code.<|||||>FYI, I've listed followups that need to happen in [this project](https://github.com/huggingface/transformers/projects/20) (will tackle them, but since I'm going off next week, I want to be sure I don't forget anything ;-) ).<|||||>Very excited about this!
transformers
5,437
closed
"Write With Transformer" not generating text (502 Bad Gateway)
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): (Distil-)GPT2 on WriteWithTransformer Language I am using the model on (English, Chinese ...): English The problem arises when using: * [X] the official example scripts: Write With Transformer * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: Not sure what the task is called but it's WWT * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Attempt to use the autocomplete on Write With Transformer 2. Notice it appears to be loading forever 3. Open the browser console, go to the Network tab, and try again 4. Observe the "502 Bad Gateway" error ``` HTTP/1.1 502 Bad Gateway Server: nginx/1.14.2 Date: Wed, 01 Jul 2020 12:14:42 GMT Content-Type: text/html Content-Length: 173 Connection: keep-alive X-JeanClaude: True Access-Control-Allow-Headers: Content-Type <html> <head><title>502 Bad Gateway</title></head> <body bgcolor="white"> <center><h1>502 Bad Gateway</h1></center> <hr><center>nginx/1.14.2</center> </body> </html> ``` ## Expected behavior The autocomplete should appear as normal. ## Environment info You'd know that better than I do.
07-01-2020 12:18:08
07-01-2020 12:18:08
@LysandreJik is rebooting the Raspberry Pi right now<|||||>Joking, I mean the Hugging Face data center<|||||>It's back up!<|||||>Thanks! It does work now, but it seems slower to respond and sometimes it times out. This is to the point where it's close to unusable. Do you happen to know whether this is on my end or yours?<|||||>Yes, there seems to be an issue. I'm fixing it by restarting the server.<|||||>So I guess that would be why I just started getting 502 Bad Gateway again? :) Thanks for the help btw.<|||||>Everything should be back to normal now. Thanks for letting us know!<|||||>Yep, I was using it earlier and it appears so. Glad I could help!
transformers
5,436
closed
Squad2 processor error
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) question answering * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) squadv2 * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. install the branch by @patrickvonplaten which adds the reformer for QA in #5433 2. run the examples script with squadv2 option enabled and squadv2 dataset, downloaded from official website <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> <pre><code> multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call last): File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/data/processors/squad.py", line 199, in squad_convert_example_to_features cls_index = span["input_ids"].index(tokenizer.cls_token_id) ValueError: None is not in list """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "run_squad.py", line 821, in <module> main() File "run_squad.py", line 763, in main train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False) File "run_squad.py", line 449, in load_and_cache_examples features, dataset = squad_convert_examples_to_features( File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/data/processors/squad.py", line 330, in squad_convert_examples_to_features features = list( File "/home/a-ware/.local/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 420, in <genexpr> return (item for chunk in result for item in chunk) File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 868, in next raise value ValueError: None is not in list </code></pre> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: linux - Python version: 3.8 - PyTorch version (GPU?): 1.4 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
07-01-2020 12:12:29
07-01-2020 12:12:29
You have to add a "[CLS]" token to the Reformer tokenizer here to make the script work. The one tokenizer that is online for Reformer, `tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")`, does not have a CLS token. If you add a new token via `tok.add_special_tokens`, then you will also have to add a new weight to the word embedding of the model. Alternatively, you could also just set the cls token to some other token that exists ```python tok.cls_token = tok.eos_token ```<|||||>But overall - not sure at all whether fine-tuning the few pretrained reformer models that we have will work well for QA.<|||||>Thanks a lot, I forgot to check the tokens. I will train a QA model for testing purposes only. If it's working correctly within my application I will train an MLM model on the PG-19 dataset<|||||>The cls_eos token does not exist either. Now the error message is "ValueError: 2 is not in list". Do you know which token exists?<|||||>not really sure what you mean by `cls_eos token`. If you use this tokenizer: `tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")`, a simple hack to make the tokenizer work is for example to set its <PAD> token as its <CLS> token: ```python tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") tok.cls_token = tok.pad_token # => now use this tokenizer in your script. ```<|||||>Oops, sorry, I meant <code>tok.cls_token = tok.eos_token</code>; my brain was thinking one thing while my hands typed another. Edit: It's working now, let's see which results we get<|||||>Are you fine-tuning the reformer-crime-and-punish model? Would be very surprised if this gives good results :D But very keen for updates :-) <|||||>At the moment I am playing with the hyperparameters. Of course, I will share my results with you. But first I need to get the trio of my dataset, the nlp library and the training script working :D<|||||>I'm trying to fine-tune Reformer on the SQuAD 2 dataset from the pre-trained model "google/crime-and-punishment". Using tok.cls_token = tok.pad_token, I have the following error: ![image](https://user-images.githubusercontent.com/75449189/196391419-7387c255-cc07-4f81-975d-da544fef95c4.png) So I added tok.pad_token = tok.eos_token, but I get a new error: `2 is not in list`. Can someone help me? Thank you <|||||>@FrancescoTroiano Hi, have you fixed the issue? I have the same problem here. I added ``` tokenizer.cls_token = tokenizer.pad_token ``` but got `ValueError: 50257 is not in list`
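For reference, a minimal sketch of the first option mentioned above (adding a real [CLS] token and resizing the embedding matrix accordingly) could look like this. `add_special_tokens` and `resize_token_embeddings` are standard transformers calls, but this path is untested on Reformer specifically:

```python
from transformers import ReformerModelWithLMHead, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")

# Register a real [CLS] token on the tokenizer ...
num_added = tokenizer.add_special_tokens({"cls_token": "[CLS]"})

# ... and grow the word embedding so the new id gets a (randomly initialized) row
model.resize_token_embeddings(len(tokenizer))
```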
transformers
5,435
closed
I want to load pre-trained model from file instead of file name
Thanks for your excellent code. I recently encountered the following problem: I want to load a pretrained model from another machine, and this server cannot map the path to my code. However, I can load the model into a buffer, so I would like to pass this buffer as the argument instead of a path. What should I do? Thanks.
07-01-2020 11:52:34
07-01-2020 11:52:34
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,434
closed
MiniLM transformers inconsistent log posteriors in multiple runs
# πŸ› Bug ## Information **Describe the bug** Using MiniLM for computing log likelihood of test sentences. Cross posted [here](https://github.com/microsoft/unilm/issues/196) The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (attached below) **To Reproduce** Steps to reproduce the behavior: 1. pip install transformers==2.11.0, torch==1.5.0 2. Run the scripts pasted below, `hugging-face-bug-report.py` 3. Compare results across `gpt2`, `distilgpt2`, `microsoft/MiniLM-L12-H384-uncased`, `microsoft/DialoGPT-small` **Expected behavior** Log posteriors should not be different across multiple runs of the same model. Example run with `gpt2` [consistent] `python hugging-face-bug-report.py -m gpt2` ``` Starting gpt2 on cpu [if available] -19.95 Hello, my dog is cute -20.09 Hello, your dog is cute -25.92 Nothing is what everything isn't ``` `python hugging-face-bug-report.py -m gpt2` ``` Starting gpt2 on cpu [if available] -19.95 Hello, my dog is cute -20.09 Hello, your dog is cute -25.92 Nothing is what everything isn't ``` Example run with `microsoft/DialoGPT-small` [consistent] `python hugging-face-bug-report.py -m microsoft/DialoGPT-small` ``` Starting microsoft/DialoGPT-small on cpu [if available] -37.22 Hello, my dog is cute -31.38 Hello, your dog is cute -31.30 Nothing is what everything isn't ``` `python hugging-face-bug-report.py -m microsoft/DialoGPT-small` ``` Starting microsoft/DialoGPT-small on cpu [if available] -37.22 Hello, my dog is cute -31.38 Hello, your dog is cute -31.30 Nothing is what everything isn't ``` **BUT** Example run with `microsoft/MiniLM-L12-H384-uncased` [**inconsistent**] `python hugging-face-bug-report.py -m microsoft/MiniLM-L12-H384-uncased` ``` Starting microsoft/MiniLM-L12-H384-uncased on cpu [if available] -82.84 Hello, my dog is cute -81.92 Hello, your dog is cute -90.66 Nothing is what everything isn't ``` `python hugging-face-bug-report.py -m microsoft/MiniLM-L12-H384-uncased` ``` Starting microsoft/MiniLM-L12-H384-uncased on cpu [if available] -78.01 Hello, my dog is cute -75.90 Hello, your dog is cute -83.02 Nothing is what everything isn't ``` - `transformers` version: 2.11.0 - Platform: macOS - Python version: Python 3.6.10 :: Anaconda, Inc. 
- PyTorch version (GPU?): 1.5.0 , No GPU - Tensorflow version (GPU?): NA - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Script ``` # hugging-face-bug-report.py #!/usr/bin/env python3 import torch import argparse from transformers import AutoTokenizer, AutoModelWithLMHead LABEL_FIELD_DICT = {'gpt2': 'labels', 'distilgpt2': 'labels', 'microsoft/MiniLM-L12-H384-uncased': 'lm_labels', 'microsoft/DialoGPT-small': 'labels'} class LM(object): def __init__(self, model_name='gpt2', device='cpu'): print('Starting {} on {} [if available]'.format(model_name, device)) self.model_name = model_name self.device = torch.device(device if torch.cuda.is_available() else 'cpu') self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelWithLMHead.from_pretrained(model_name).to(self.device) def prepare_batch(self, texts): tokenized_input = [] for text in texts: text_ids = self.tokenizer.encode(text, add_special_tokens=True) tokenized_input.append(text_ids) lens = list(map(len, tokenized_input)) maxlen = max(lens) for i, t in enumerate(tokenized_input): tokenized_input[i] += [self.tokenizer.unk_token_id] * (maxlen - len(t)) return torch.tensor(tokenized_input), torch.tensor(lens) def score(self, texts): with torch.no_grad(): tensor_input, lens = self.prepare_batch(texts) mask = torch.arange(tensor_input.size(1))[None, :] < lens[:, None] labels = tensor_input.clone().detach() labels[~mask] = -100 params = list(map(lambda x: x.to(self.device), [tensor_input, mask, labels])) inputs = {'input_ids': params[0], 'attention_mask': params[1], LABEL_FIELD_DICT[self.model_name]: params[2]} outputs = self.model(**inputs) loss, logits = outputs[:2] log_posteriors = torch.log(torch.nn.Softmax(dim=2)(logits)) results = [] total_lp = 0.0 for i, text in enumerate(texts): ids = tensor_input[i, :] lp = log_posteriors[i, :, :] sum_lp = 0.0 for j, k in enumerate(ids.tolist()[1:]): if j + 1 >= lens[i]: break sum_lp += lp[j, k] results.append((text, sum_lp)) total_lp += sum_lp total_lp_alternative = -loss * torch.sum(lens - 1) assert(torch.isclose(total_lp_alternative, total_lp)), \ "{:.3f} β‰  {:.3f}".format(total_lp_alternative, total_lp) return results def get_available_devices(): return ['cpu'] + ['cuda:{}'.format(idx) for idx in range(torch.cuda.device_count())] if __name__ == "__main__": model_choices = list(LABEL_FIELD_DICT.keys()) parser = argparse.ArgumentParser('Runninig LM on sample text from CLI') parser.add_argument('-m', '--model', help='model type', default='gpt2', choices=model_choices) parser.add_argument('-d', '--device', help='device', default='cpu', choices=get_available_devices()) args = parser.parse_args() lm = LM(model_name=args.model, device=args.device) test_inputs = ["Hello, my dog is cute", "Hello, your dog is cute", "Nothing is what everything isn't"] results = lm.score(test_inputs) for text, score in results: print(f"{score.item():.2f} {text}") ```
07-01-2020 10:39:32
07-01-2020 10:39:32
What was the issue?<|||||>MiniLM is not distilled with the Masked LM task, only with [Self-Attention distillation](https://github.com/huggingface/transformers/tree/master/model_cards/microsoft/MiniLM-L12-H384-uncased). It doesn't have an LM head in the weights file, so those weights are initialised randomly at each run πŸ€— ``` {'missing_keys': ['cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.predictions.decoder.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'], 'unexpected_keys': [], 'error_msgs': []} ```
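This is easy to verify from user code by asking `from_pretrained` for its loading info and looking at the missing keys (a short sketch; `output_loading_info=True` is a standard `from_pretrained` keyword argument):

```python
from transformers import AutoModelWithLMHead

model, loading_info = AutoModelWithLMHead.from_pretrained(
    "microsoft/MiniLM-L12-H384-uncased", output_loading_info=True
)

# The cls.predictions.* LM head weights are missing from the checkpoint,
# so they get a fresh random initialization on every load.
print(loading_info["missing_keys"])
```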
transformers
5,433
closed
[Reformer] Add QA head to reformer model
This PR adds `ReformerForQuestionAnswering`. At the moment there are no pretrained weights for Reformer QA, so no example is added. Checked all tests including RUN_SLOW on GPU => all pass.
07-01-2020 10:36:29
07-01-2020 10:36:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=h1) Report > Merging [#5433](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.56%`. > The diff coverage is `65.71%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5433/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5433 +/- ## ========================================== + Coverage 77.69% 78.25% +0.56% ========================================== Files 140 140 Lines 24334 24368 +34 ========================================== + Hits 18906 19070 +164 + Misses 5428 5298 -130 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.11% <64.70%> (-1.12%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.68% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=footer). Last update [87716a6...e892adb](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,432
closed
Create model card
Create model card for electra-base-discriminator fine-tuned on SQUAD v1.1
07-01-2020 10:30:58
07-01-2020 10:30:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=h1) Report > Merging [#5432](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d60d231ea497aa2ed46226f51e360b207a79682e&el=desc) will **increase** coverage by `0.23%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5432/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5432 +/- ## ========================================== + Coverage 77.61% 77.84% +0.23% ========================================== Files 140 140 Lines 24343 24343 ========================================== + Hits 18893 18951 +58 + Misses 5450 5392 -58 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+2.51%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=footer). Last update [d60d231...c047d80](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,431
closed
Can't load to predict a reproduced DistilBERT
How do I load and run predictions with a fine-tuned DistilBERT multi-class classification model?
07-01-2020 10:20:57
07-01-2020 10:20:57
I have tested reproducing **[Fine Tuning Transformer for MultiClass Text Classification](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)** successfully. But when I tried to load the model and vocab files from a separate file, predict_distilbert.ipynb, as below:

```python
# Importing libraries
import pandas as pd
import torch
import transformers
import numpy as np
from torch.utils.data import Dataset, DataLoader
from transformers import DistilBertModel, DistilBertTokenizer

test_string = "The temperature, relative humidity and wind information shown above are the respective forecasts over a 24-hour period."
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased','models/vocab_distilbert_news.bin')
load_model = DistilBertModel.from_pretrained('distilbert-base-cased','models/pytorch_distilbert_news.bin')
```

I got the following TypeError traceback:

```
<ipython-input-17-274a96b92c04> in <module>
----> 1 tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased','models/vocab_distilbert_news.bin')
      2 load_model = DistilBertModel.from_pretrained('distilbert-base-cased','models/pytorch_distilbert_news.bin')

~/anaconda3/envs/hgface/lib/python3.7/site-packages/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)
--> 911     return cls._from_pretrained(*inputs, **kwargs)
    912
    913     @classmethod

~/anaconda3/envs/hgface/lib/python3.7/site-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
   1060     # Instantiate tokenizer.
   1061     try:
-> 1062         tokenizer = cls(*init_inputs, **init_kwargs)
   1063     except OSError:
   1064         raise OSError(

TypeError: __init__() got multiple values for argument 'vocab_file'
```

Please help! <|||||># Solved this problem in [Fine tuning DistilBERT model OSError: Unable to load weights from pytorch checkpoint file. #4](https://github.com/abhimishra91/transformers-tutorials/issues/4)
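The `TypeError` above comes from passing two positional arguments to `from_pretrained`: the second positional argument is forwarded to the tokenizer's `__init__`, which then receives `vocab_file` twice. A sketch of the usual round trip instead, under the assumption that the fine-tuned model and tokenizer are saved with `save_pretrained` into one directory (the directory name is illustrative):

```python
import os

from transformers import DistilBertModel, DistilBertTokenizer

# Stand-ins for the fine-tuned objects from the training notebook
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased")
model = DistilBertModel.from_pretrained("distilbert-base-cased")

# Save both into one directory after fine-tuning
os.makedirs("models/distilbert_news", exist_ok=True)
model.save_pretrained("models/distilbert_news")
tokenizer.save_pretrained("models/distilbert_news")

# Later, in the prediction notebook, load everything back from that single directory
tokenizer = DistilBertTokenizer.from_pretrained("models/distilbert_news")
model = DistilBertModel.from_pretrained("models/distilbert_news")
```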
transformers
5,430
closed
Create model card
Create model card for electra-small-discriminator finetuned on SQUAD v1.1
07-01-2020 09:53:30
07-01-2020 09:53:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=h1) Report > Merging [#5430](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d60d231ea497aa2ed46226f51e360b207a79682e&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5430/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5430 +/- ## ========================================== - Coverage 77.61% 77.60% -0.01% ========================================== Files 140 140 Lines 24343 24343 ========================================== - Hits 18893 18892 -1 - Misses 5450 5451 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=footer). Last update [d60d231...e3436bf](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,429
closed
QA Pipelines fixes
**1. Some newly introduced models such as [bart-large-finetuned-squadv1](https://huggingface.co/valhalla/bart-large-finetuned-squadv1) have more than 2 outputs by default on the QA pipeline, which is not supported.** - This PR makes it possible to support such outputs and assumes the first 2 elements are the actual `start` and `end` logits. **2. Minor refactoring of the decoding strategy:** - Actually mask the padding & question **before** applying the softmax to extract the answer - Use the stabilized version of the `softmax` in log-space
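For context, the "stabilized softmax in log-space" in point 2 refers to subtracting the per-row maximum before exponentiating, so that large logits cannot overflow. A minimal NumPy sketch of the idea (illustrative, not the pipeline's exact code):

```python
import numpy as np


def log_softmax(x, axis=-1):
    shifted = x - x.max(axis=axis, keepdims=True)   # stabilization step
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))


logits = np.array([[1000.0, 1001.0, 999.0]])
probs = np.exp(log_softmax(logits))                 # no overflow, rows sum to 1
print(probs)
```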
07-01-2020 09:40:53
07-01-2020 09:40:53
Maybe we should fix this upstream. I wanted to keep identical behavior for `squad_convert_examples_to_features` while moving the code to the new tokenizer API, but maybe I missed something.<|||||>@thomwolf I removed the commit on the padding part to make sure things continue to work in the very short term. Also, after looking at the code, I have the feeling it requires quite a bit of refactoring that might live in its own PR, so I prefer to keep the few changes here isolated from the padding stuff.
[src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=footer). Last update [9a473f1...55e2f90](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,428
closed
How to use (and preferably finetune) BART for text infilling?
[Here](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration) it is shown how to use BART for simple mask filling (one <mask> token = one generated token), but how can it be used for text infilling? The BART paper states that the model was pretrained on such a task, so it should be possible. Is the only solution to simply take the `facebook/bart-large` model for summarization and finetune it on a dataset with <mask> tokens, or is there a better way?
07-01-2020 09:27:05
07-01-2020 09:27:05
@julien-c , @sshleifer ?<|||||>Sorry for the slow response. Unfortunately, text infilling is not yet supported. It would be a welcome contribution! I think the equivalent fairseq task is called `DenoisingTask` https://github.com/pytorch/fairseq/blob/aa79bb9c37b27e3f84e7a4e182175d3b50a79041/fairseq/tasks/denoising.py#L27<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,427
closed
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
# πŸ› Bug ## Information model I am using (Bert, XLNet ...): Bert language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ip1 = Input(shape = (max_length+2,), dtype="int32") ip2 = Input(shape = (max_length+2,), dtype="int32") ip3 = Input(shape = (max_length+2,), dtype="int32") Bert_model = TFBertModel.from_pretrained('bert-base-uncased') ip = Bert_model(ip1, attention_mask=ip2, token_type_ids=ip3)[0][:,1:-1,:] out = Bidirectional(LSTM(units=768))(ip) out = Dense(384, activation='relu')(out) out = Dropout(0.2)(out) out = Dense(units=9, activation="softmax")(out) model = Model([ip1, ip2, ip3], out) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce When use Model.fit gives following warnings WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss. Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform:windows 10 - Python version:3.7 - PyTorch version (GPU?): - Tensorflow version (GPU?):2.1.0 - Using GPU in script?:NO- Using distributed or parallel set-up in script?:NO
07-01-2020 09:07:02
07-01-2020 09:07:02
I am also encountering a similar issue from yesterday. It never happened before. ``` WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss. ```<|||||>This https://github.com/huggingface/transformers/issues/5421#issuecomment-652626787 may be useful.<|||||>Closed by mistake<|||||>Hello everyone, I am fine-tuning a BERT model from huggingface transformers for Named Entity Recognition Task in tensorflow. The input to the model is a single word and output is a tag of that word. I have created a custom generator function (data_generator) from where I am getting data while training. I have freezed the bert layer in training mode and added some layers on top of it to predict the tag of the given word. **The code is this :** ```python from tensorflow.keras.layers import Input, Dense, Activation, Dropout, LSTM, GlobalMaxPool1D from tensorflow.keras.models import Model from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing.sequence import pad_sequences from transformers import BertTokenizer, TFBertModel, BertConfig ##Load the BERT tokenizer. print('Loading BERT tokenizer...') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) bert = 'bert-base-uncased' config = BertConfig(dropout=0.2, attention_dropout=0.2) config.output_hidden_states = False transformer_model = TFBertModel.from_pretrained(bert, config = config) input_ids_in = Input(shape=(max_len,), name='input_token', dtype='int32') input_masks_in = Input(shape=(max_len,), name='masked_token', dtype='int32') embedding_layer = transformer_model(input_ids_in, attention_mask=input_masks_in)[0] X = LSTM(50, return_sequences=True)(embedding_layer) X = GlobalMaxPool1D()(X) X = Dense(50, activation='relu')(X) X = Dropout(0.2)(X) X = Dense(num_labels, activation='softmax')(X) model = Model(inputs=[input_ids_in, input_masks_in], outputs = X) for layer in model.layers[:3]: layer.trainable = False model.compile(loss='categorical_crossentropy', optimizer='adam') train_gen = data_generator(sentences, tags, tag2ix, max_len, number_sent_per_batch) model.fit(train_gen, epochs=1, steps_per_epoch=steps, verbose=1) ``` **The error I am getting is this :** ```python ValueError: in user code: /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function * outputs = self.distribute_strategy.run( /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run ** return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica return fn(*args, **kwargs) 
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:541 train_step ** self.trainable_variables) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1804 _minimize trainable_variables)) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:521 _aggregate_gradients filtered_grads_and_vars = _filter_grads(grads_and_vars) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1219 _filter_grads ([v.name for _, v in grads_and_vars],)) ValueError: No gradients provided for any variable: ['lstm_2/lstm_cell_2/kernel:0', 'lstm_2/lstm_cell_2/recurrent_kernel:0', 'lstm_2/lstm_cell_2/bias:0', 'dense_8/kernel:0', 'dense_8/bias:0', 'dense_9/kernel:0', 'dense_9/bias:0']. ``` I have gone through many links like: <https://github.com/tensorflow/tensorflow/issues/1511>, <https://github.com/tensorflow/tensorflow/issues/27949>, <https://github.com/huggingface/transformers/issues/5421> and many more. There are many solutions provided in these GitHub issues, but I couldn't find the solution to my error. I have even posted on Stack Overflow (<https://stackoverflow.com/questions/62863374/valueerror-no-gradients-provided-for-any-variable-in-tensorflow-2-2-0>) but couldn't find the solution. If someone can point out the mistake, it would be of great help. Thanks in advance! Tensorflow Version: 2.2.0<|||||>πŸ‘€<|||||>You need to freeze part of the parameters. My understanding is that the classification component built into the BERT model (the one based on the CLS token) is not used, yet gradients are still passed in at the end to update it, so the error is reported. Freeze that part of the parameters!!!
transformers
5,426
closed
[Reformer] Add Masked LM Reformer
Similar to BERT, the Reformer LM model is split in two: - The standard causal language modeling Reformer `ReformerModelWithLMHead`: here we have a tiny breaking change, as `ReformerModelWithLMHead` can no longer be used with bi-directional self-attention. This option should not really have been used anyways, as there are no pretrained weights - A masked language model Reformer `ReformerForMaskedLM`. Here is a colab notebook showcasing how to use Reformer for MLM: https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing Checked all tests including RUN_SLOW on GPU => all pass.
07-01-2020 08:30:53
07-01-2020 08:30:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=h1) Report > Merging [#5426](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/35befd9ce31c23a774fd34f57bc44033ce70141d&el=desc) will **increase** coverage by `0.29%`. > The diff coverage is `96.15%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5426/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5426 +/- ## ========================================== + Coverage 77.57% 77.86% +0.29% ========================================== Files 141 140 -1 Lines 24581 24368 -213 ========================================== - Hits 19068 18974 -94 + Misses 5513 5394 -119 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `89.45% <96.00%> (+1.34%)` | :arrow_up: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `78.26% <0.00%> (-7.46%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.84% <0.00%> (-0.71%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (-0.50%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.31% <0.00%> (-0.48%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (-0.39%)` | :arrow_down: | | ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=footer). 
Last update [35befd9...4e52c6b](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Yeah, I'm only adding one assert, which forces `ReformerLMHead` to not have bi-directional attention, but I doubt anybody has used this yet anyway.<|||||>When I make the sequences shorter but increase the batch size, a zero division error is raised. Do I need to take care of something specific?<|||||>> When I make the sequences shorter but increase the batch size, a zero division error is raised. > Do I need to take care of something specific? Hey @flozi00, it would be great if you could open an issue with environment info and code so that I can reproduce :-)
transformers
5,425
closed
[Quick poll] Give your opinion on the future of πŸ€— transformers
The πŸ€— transformers library is at a crossroad 🚏 and could evolve in many directions, from teaching to research & applications. We made a quick poll to get your opinion. If you have 2-3 minutes and want to participate in shaping the future of the library πŸ‘‰ https://docs.google.com/forms/d/e/1FAIpQLSeKWNE1SyaSvqLYxWQxTA_XeRCVm3_ohmr3UXJgpIxzZhSXlg/viewform (please reply in the above feedback form rather than to this thread)
07-01-2020 08:07:59
07-01-2020 08:07:59
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,424
closed
Bart EncoderLayer masked_fill not working properly with pytorch 1.4
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Bart I'm trying to use the EncoderLayer of Bart but I realized that `attn_weights = attn_weights.masked_fill(reshaped, float("-inf"))` at line [659](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L659) does not work when `reshaped` is an `int` or `float` tensor and throws the following error: ``` attn_weights = attn_weights.masked_fill(reshaped, float("-inf")) RuntimeError: Expected object of scalar type Bool but got scalar type Float for argument #2 'mask' in call to _th_masked_fill_bool_ ``` However it does not raise an error when I change `reshaped` type to bool but in that case it returns a tensor of `nan` values. With @patrickvonplaten helps, we realized that it was related to a pytorch version because upgrading my torch version from 1.4 to 1.5 solved the problem ## To reproduce ``` from transformers.modeling_bart import EncoderLayer from transformers import BartConfig import torch hidden_states = torch.tensor(3 * [ 7 * [ 1024 * [0.4]]]) attn_mask = torch.ones(hidden_states.shape[:2]) layer = EncoderLayer(BartConfig()) layer(hidden_states.transpose(0, 1), attn_mask) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0 - Python version: 3.6 - PyTorch version (GPU?):1.4 - Using GPU in script?: no - Using distributed or parallel set-up in script?:no
07-01-2020 07:54:12
07-01-2020 07:54:12
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hello, I still have this problem after my PyTorch was upgraded to 1.5. I don't know if it's related to the Python version. Can you give me some suggestions? Thank you so much! Information ` File "/home/ynos/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py", line 3937, in multi_head_attention_forward float('-inf'), RuntimeError: Expected object of scalar type Bool but got scalar type Long for argument #2 'mask' in call to _th_masked_fill_bool_ ` Environment info: - Python version: 3.6.2 - PyTorch version (GPU): 1.5
transformers
5,423
closed
Error Instantiating T5-11B from contributed models
# πŸ› Bug ## Information Model I am using : T5-11B Language I am using the model on: English The problem arises when using: when I try downloading the T5-11B model The tasks I am working on is: Evaluating ROGUE score on CNN dataset ## To reproduce Steps to reproduce the behavior: Just try instantiating the T5-11B model using the AutoModel Class Error Message: OSError: Can't load weights for 't5-11b'. Make sure that: - 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models' - or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. ## Expected behavior Would instatntiate the ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-5.3.0-61-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
07-01-2020 03:18:48
07-01-2020 03:18:48
same result. I can't download<|||||>Please see https://github.com/huggingface/transformers/issues/5986#issuecomment-663090043<|||||>Works when I use: ```python import transformers t5 = transformers.AutoModel.from_pretrained('t5-11b', use_cdn = False) ``` Thank You!
transformers
5,422
closed
Create README.md
Card for my model
07-01-2020 01:41:21
07-01-2020 01:41:21
Cool<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=h1) Report > Merging [#5422](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fcf0652460753f8a81f7576e8abdaa6b3742f00e&el=desc) will **decrease** coverage by `0.41%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5422/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5422 +/- ## ========================================== - Coverage 76.69% 76.28% -0.42% ========================================== Files 140 140 Lines 24343 24343 ========================================== - Hits 18671 18570 -101 - Misses 5672 5773 +101 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.92% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.31% <0.00%> (+0.18%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+1.32%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+13.07%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=footer). Last update [fcf0652...a0ee0ae](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,421
closed
What to do about this warning message: "Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification"
```
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
```

returns this warning message:

```
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

This just started popping up with v.3 so I'm not sure what is the recommended action to take here. Please advise if you can. Basically, any of my code using the `AutoModelFor<X>` is throwing up this warning now. Thanks.
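As a minimal sketch of the intended workflow behind this warning (not from the original question): the classification head is randomly initialized on the first load, but once the model has been fine-tuned and saved, reloading that checkpoint into the same architecture finds every weight and the warning goes away.

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# ... fine-tune the classifier head (and optionally the encoder) on your task here ...

model.save_pretrained("./my-finetuned-bert")

# Reloading the fine-tuned checkpoint into the same architecture finds every weight,
# so this second load does not produce the "newly initialized" warning.
model = AutoModelForSequenceClassification.from_pretrained("./my-finetuned-bert")
```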
07-01-2020 01:31:55
07-01-2020 01:31:55
Not sure what's happening with the multiple duplicate opened issues, @ohmeow? Is GitHub flaky again? :)<|||||>I am also encountering the same warning. When loading the model ``` Some weights of the model checkpoint at bert-base-uncased were not used when initializing TFBertModel: ['nsp___cls', 'mlm___cls'] - This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). All the weights of TFBertModel were initialized from the model checkpoint at bert-base-uncased. If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFBertModel for predictions without further training. ``` When attempting to fine tune it: ``` WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss. ``` Is the model correctly fine-tuning? Are the pre-trained model weights also getting updated (fine-tuned) or only the layers outside(above) the pre-trained model are changing their weights while training? <|||||>> Not sure what's happening with the multiple duplicate opened issues, @ohmeow? > > Is GitHub flaky again? :) I noticed the same thing. Not sure what is going on ... but I swear I only opened this one :)<|||||>@ohmeow you're loading the `bert-base-cased` checkpoint (which is a checkpoint that was trained using a similar architecture to `BertForPreTraining`) in a `BertForSequenceClassification` model. This means that: - The layers that `BertForPreTraining` has, but `BertForSequenceClassification` does not have will be discarded - The layers that `BertForSequenceClassification` has but `BertForPreTraining` does not have will be randomly initialized. This is expected, and tells you that you won't have good performance with your `BertForSequenceClassification` model before you fine-tune it :slightly_smiling_face:. @fliptrail this warning means that during your training, you're not using the `pooler` in order to compute the loss. I don't know how you're finetuning your model, but if you're not using the pooler layer then there's no need to worry about that warning.<|||||>@LysandreJik Thank you for your response. 
I am using the code: ``` def main_model(): encoder = ppd.TFBertModel.from_pretrained("bert-base-uncased") input_ids = tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32) token_type_ids = tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32) attention_mask = tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32) embedding = encoder(input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)[0] pooling = tf.keras.layers.GlobalAveragePooling1D()(embedding) normalization = tf.keras.layers.BatchNormalization()(pooling) dropout = tf.keras.layers.Dropout(0.1)(normalization) out = tf.keras.layers.Dense(1, activation="sigmoid", name="final_output_bert")(dropout) model = tf.keras.Model(inputs=[input_ids, token_type_ids, attention_mask], outputs=out) loss = tf.keras.losses.BinaryCrossentropy(from_logits=True) optimizer = tf.keras.optimizers.Adam(lr=2e-5) metrics=['accuracy', tf.keras.metrics.FalseNegatives(), tf.keras.metrics.FalsePositives()] model.compile(optimizer=optimizer, loss=loss, metrics=metrics) return model model = main_model() model.summary() ``` I am only using the `TFBertModel.from_pretrained("bert-base-uncased")` pre-built class. I am not initializing it from any other class. Still, I am encountering the warning. From what I can understand this should only appear when initializing given pre-trained model inside another class. Am I fine-tuning correctly? Are the BERT layer weights also getting updated? Warning while loading model: ``` Some weights of the model checkpoint at bert-base-uncased were not used when initializing TFBertModel: ['nsp___cls', 'mlm___cls'] - This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). All the weights of TFBertModel were initialized from the model checkpoint at bert-base-uncased. If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFBertModel for predictions without further training. ``` While attempting to train: ``` WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss. ``` This warning only started to appear from yesterday in all my codes and other sample codes given.<|||||>Hello everyone, I also start getting this error today. before today it was working fine. Are there any changes that take place in colab? 
This is the code I am using: !pip install transformers import TensorFlow as to import transformers from transformers import TFBertForSequenceClassification, BertConfig tokenizer = transformers.BertTokenizer('gdrive/My Drive/Colab Notebooks/vocab.txt', do_lower_case=True) max_seq_length = 128 bert = 'bert-large-uncased' config = BertConfig.from_pretrained('bert-large-uncased', output_hidden_states=True, hidden_dropout_prob=0.2, attention_probs_dropout_prob=0.2) transformer_model = TFBertForSequenceClassification.from_pretrained(bert, config=config) input_ids_in = tf.keras.layers.Input(shape=(max_seq_length,), name='input_token', dtype='int32') input_masks_in = tf.keras.layers.Input(shape=(max_seq_length,), name='masked_token', dtype='int32') input_segments_in = tf.keras.layers.Input(shape=(max_seq_length,), name='segment_ids', dtype='int32') embedding_layer = transformer_model(input_ids_in, attention_mask=input_masks_in, token_type_ids=input_segments_in) I have been using this same code for more than 2 weeks and no problem till yesterday. Please if anyone finds the solution, share it. Thank you<|||||>Thanks @LysandreJik > This is expected, and tells you that you won't have good performance with your BertForSequenceClassification model before you fine-tune it Makes sense. Now, how do we know what checkpoints are available that ***were*** trained on `BertForSequenceClassification`?<|||||>@fliptrail in your code you have the following: ```py embedding = encoder(input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)[0] ``` which means you're only getting the first output of the model, and using that to compute the loss. The first output of the model is the hidden states: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_bert.py#L716-L738 ``` Returns: :obj:`tuple(tf.Tensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs: last_hidden_state (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the model. pooler_output (:obj:`tf.Tensor` of shape :obj:`(batch_size, hidden_size)`): Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during Bert pretraining. This output is usually *not* a good summary of the semantic content of the input, you're often better with averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`: Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. """ ``` You're ignoring the second value which is the pooler output. 
The warnings are normal in your case.<|||||>@VaibhavBhatnagar17, these are warnings, not errors. What exact warning are you not understanding?<|||||>@ohmeow that really depends on what you want to do! Sequence classification is a large subject, with many different tasks. [Here's](https://huggingface.co/models/?filter=text-classification) a list of all available checkpoints fine-tuned on sequence classification (not all are for BERT, though!) Please be aware that if you have a specific task in mind, you should fine-tune your model to that task.<|||||>@LysandreJik Hey, What I am not able to understand is that I was using this code for more than 2 weeks and no warning came up till yesterday. I haven't changed anything but suddenly this warning came up is confusing. I am not getting the same output dimension as before and not able to complete my project. <|||||>The warning came up yesterday because version 3.0.0 was released yesterday. It's weird that you saw an output dimension changed since yesterday. What's the error you get?<|||||>I see this same warning when initializing `BertForMaskedLM`, pasted in below for good measure. As other posters have mentioned, this warning began appearing only after upgrading to v3.0.0. ``` Some weights of the model checkpoint at bert-large-uncased-whole-word-masking were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BertForMaskedLM were not initialized from the model checkpoint at bert-large-uncased-whole-word-masking and are newly initialized: ['cls.predictions.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` Note that my module imports/initializations essentially duplicate the snippet demonstrating cloze task usage at https://huggingface.co/bert-large-uncased-whole-word-masking?text=Paris+is+the+%5BMASK%5D+of+France. ``` from transformers import BertTokenizer, BertForMaskedLM _tokenizer = BertTokenizer.from_pretrained( 'bert-large-uncased-whole-word-masking') _model = BertForMaskedLM.from_pretrained( 'bert-large-uncased-whole-word-masking') ``` Am I correct in assuming that nothing has changed in the behavior of the relevant model, but that perhaps this warning should have been being printed all along?<|||||>You're right, this has always been the behavior of the models. It wasn't clear enough before, so we've clarified it with this warning.<|||||>Thanks, @LysandreJik .<|||||>Anyone knows how to suppress this warning? 
I am aware that the model needs fine-tuning and I am fine-tuning it so, it becomes annoying to see this over and over again.<|||||>You can manage the warnings with the `logging` utility introduced in version 3.1.0: ```py from transformers import logging logging.set_verbosity_warning() ```<|||||>@LysandreJik Thanks for the rapid response, I set it with set_verbosity_error() <|||||>@LysandreJik - So , by default bert-base-uncased loading from ```TFBertModel``` has ```199``` variables ```[ 3embedding + 2 layer norms + (16 x 12 layers) + 2 (pooler kernel and bias )] ```. But when loading from ```TFBertForMaskedLM```, it has ```204``` variables. Below are the 5 extra variables ``` tf_bert_for_masked_lm_1/mlm___cls/predictions/bias:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/kernel:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/bias:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/gamma:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/beta:0 ``` So that means , these 5 variables are randomly initialising right. Are these 5 variables required for MLM ( is this how it is in official tensorflow models ) OR can we take output token embeddings ( before passing to mlm___cls ) ```( batch x sequence x embedding_dimension ) ```, multiply it with ```word_embedding matrix``` to produce ```( batch x sequence x vocab_size ) ``` and then use that for MLM loss . <|||||>@LysandreJik I'm having a slightly different issue here - I'm loading a sequence classification checkpoint in a `AutoModelForSequenceClassification` model. But I still get the warning. Here's my code: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained('roberta-large-mnli') ``` Output: ``` Some weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` I believe it's NOT expected because I'm indeed initializing from a model that I expect to be exactly identical. I'm only starting to get this warning after upgrading to transformers v3 as well. I'm using 3.3.1 currently. Could you please help? Thanks! <|||||>@s4sarath I'm not sure I understand your question. @veronica320, the pooler layer is not used when doing sequence classification, so there's nothing to be worried about. The pooler is the second output of the `RobertaModel`: https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/modeling_roberta.py#L691 But only the first output is used in the sequence classification model: https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/modeling_roberta.py#L1002<|||||>Thanks a lot!<|||||>@LysandreJik - Sorry to make you confused . 
``` tf_bert_for_masked_lm_1/mlm___cls/predictions/bias:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/kernel:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/bias:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/gamma:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/beta:0 ``` The above 4 variables are randomly initialising right, means they were not a part of official BERT . Am i right?<|||||>Thank you for your explanation. Actually these four variables shouldn't be initialized randomly, as they're part of BERT. The official BERT checkpoints contain two heads: the MLM head and the NSP head. You can see it here: ```py >>> from transformers import TFBertForMaskedLM >>> model = TFBertForMaskedLM.from_pretrained("bert-base-cased") ``` Among the logging, you should find this: ``` Some layers from the model checkpoint at bert-base-cased were not used when initializing TFBertForMaskedLM: ['nsp___cls'] - This IS expected if you are initializing TFBertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing TFBertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). All the layers of TFBertForMaskedLM were initialized from the model checkpoint at bert-base-cased. ``` This tells you two things: - Some layers of the checkpoints are not used. These are `['nsp___cls']`, corresponding to the CLS head. Since we're using a `***ForMaskedLM`, it makes sense not to use the CLS head - All the layers of the model were initialized from the model checkpoint, as both the transformer layers and the MLM head were present in the checkpoint. If you're getting those variables randomly initialized: ``` tf_bert_for_masked_lm_1/mlm___cls/predictions/bias:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/kernel:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/bias:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/gamma:0 tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/beta:0 ``` then it means you're using a checkpoint that does not contain these variables. These are the MLM layers, so you're probably loading a checkpoint that was saved using an architecture that does not contain these layers. This can happen if you do the following: ```py >>> from transformers import TFBertModel, TFBertForMaskedLM >>> model = TFBertModel.from_pretrained("bert-base-cased") >>> model.save_pretrained(directory) >>> mlm_model = TFBertForMaskedLM.from_pretrained(directory) ``` I hope this answers your question!<|||||>Oh okay. Thank you so much for the clarification. When I looked at bert models from tf-hub , these 4 variables were not present. That was the reason for the confusion . On Tue, Oct 27, 2020, 7:02 PM Lysandre Debut <[email protected]> wrote: > Thank you for your explanation. > > Actually these four variables shouldn't be initialized randomly, as > they're part of BERT. The official BERT checkpoints contain two heads: the > MLM head and the NSP head. 
> > You can see it here: > > >>> from transformers import TFBertForMaskedLM>>> model = TFBertForMaskedLM.from_pretrained("bert-base-cased") > > Among the logging, you should find this: > > Some layers from the model checkpoint at bert-base-cased were not used when initializing TFBertForMaskedLM: ['nsp___cls'] > - This IS expected if you are initializing TFBertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). > - This IS NOT expected if you are initializing TFBertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). > All the layers of TFBertForMaskedLM were initialized from the model checkpoint at bert-base-cased. > > This tells you two things: > > - Some layers of the checkpoints are not used. These are ['nsp___cls'], > corresponding to the CLS head. Since we're using a ***ForMaskedLM, it > makes sense not to use the CLS head > - All the layers of the model were initialized from the model > checkpoint, as both the transformer layers and the MLM head were present in > the checkpoint. > > If you're getting those variables randomly initialized: > > tf_bert_for_masked_lm_1/mlm___cls/predictions/bias:0 > tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/kernel:0 > tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/bias:0 > tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/gamma:0 > tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/beta:0 > > then it means you're using a checkpoint that does not contain these > variables. These are the MLM layers, so you're probably loading a > checkpoint that was saved using an architecture that does not contain these > layers. This can happen if you do the following: > > >>> from transformers import TFBertModel, TFBertForMaskedLM>>> model = TFBertModel.from_pretrained("bert-base-cased")>>> model.save_pretrained(directory)>>> mlm_model = TFBertForMaskedLM.from_pretrained(directory) > > I hope this answers your question! > > β€” > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/5421#issuecomment-717245807>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ACRE6KEEQACWSAEO3GK3CL3SM3DYNANCNFSM4OM5S2SQ> > . > <|||||>Hi @LysandreJik . I had a look at the official BERT repo . There are only 199 variables in the official model checkpoints. Which means, of 204 variables ( last 5 variables for MLM layer ) is initialised randomly. These variables are not a part of official checkpoints I think. <|||||>> @ohmeow you're loading the `bert-base-cased` checkpoint (which is a checkpoint that was trained using a similar architecture to `BertForPreTraining`) in a `BertForSequenceClassification` model. > > This means that: > > * The layers that `BertForPreTraining` has, but `BertForSequenceClassification` does not have will be discarded > * The layers that `BertForSequenceClassification` has but `BertForPreTraining` does not have will be randomly initialized. > > This is expected, and tells you that you won't have good performance with your `BertForSequenceClassification` model before you fine-tune it πŸ™‚. > > @fliptrail this warning means that during your training, you're not using the `pooler` in order to compute the loss. 
I don't know how you're finetuning your model, but if you're not using the pooler layer then there's no need to worry about that warning. Where does the random initialization of the missing parameters occur? I don't see any calls to `_init_weights`.<|||||>@rkunani - did you get answer to this? I am also facing the same issue....<|||||>@PremalMatalia I looked into it myself and found that the initialization of the `nn.Linear` layer on line 1469 [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py) is where the parameters are randomly initialized (see the `nn.Linear` [documentation](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html)). <|||||>There is something wrong. There is nothing to be randomly initia;ized, unless it is a new layer out of architecture. On Sun, Apr 4, 2021 at 5:02 AM Raguvir Kunani ***@***.***> wrote: > @PremalMatalia <https://github.com/PremalMatalia> I looked into it myself > and found that the initialization of the nn.Linear layer on line 1469 here > <https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py> > is where the parameters are randomly initialized (see the nn.Linear > documentation > <https://pytorch.org/docs/stable/generated/torch.nn.Linear.html>). > > β€” > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/5421#issuecomment-812940724>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ACRE6KHJ4RZMPNOT6KWU7HTTG6QPDANCNFSM4OM5S2SQ> > . > <|||||>Hi, is there any solution? I have a same problem. the warning as below: ``` Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMultiLabelSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] - This IS expected if you are initializing BertForMultiLabelSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing BertForMultiLabelSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BertForMultiLabelSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` **the learner still can fit and predict, but the prediction is not consistent every time**<|||||>I don't know brother. I really can't understand those warnings Because it doesn't make sense. Check github.com/legacyai/tf-tranaformers . A new and improved version is on the way. On Tue, Apr 13, 2021, 2:48 PM TingNLP ***@***.***> wrote: > Hi, is there any solution? > I have a same problem. 
> #339 <https://github.com/huggingface/transformers/issues/339> #18 > <https://github.com/huggingface/transformers/pull/18> #132 > <https://github.com/huggingface/transformers/issues/132> > the warning as below: > > Some weights of the model checkpoint at bert-base-uncased were not used > when initializing BertForMultiLabelSequenceClassification: > ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', > 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', > 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', > 'cls.predictions.transform.LayerNorm.weight', > 'cls.predictions.transform.LayerNorm.bias'] > > - This IS expected if you are initializing > BertForMultiLabelSequenceClassification from the checkpoint of a model > trained on another task or with another architecture (e.g. initializing a > BertForSequenceClassification model from a BertForPretraining model). > - This IS NOT expected if you are initializing > BertForMultiLabelSequenceClassification from the checkpoint of a model that > you expect to be exactly identical (initializing a > BertForSequenceClassification model from a BertForSequenceClassification > model). > Some weights of BertForMultiLabelSequenceClassification were not > initialized from the model checkpoint at bert-base-uncased and are newly > initialized: ['classifier.weight', 'classifier.bias'] > You should probably TRAIN this model on a down-stream task to be able > to use it for predictions and inference. > > β€” > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/5421#issuecomment-818586586>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ACRE6KEATEJTCCLXBKPFVZDTIQD7ZANCNFSM4OM5S2SQ> > . > <|||||>All of the `BertForXXX` models consist of a BERT [model](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) followed by some head which is task-specific. For sequence classification tasks, the head is just a linear layer which maps the BERT transformer hidden state vector to a vector of length `num_labels`, where `num_labels` is the number of classes for your classification task (for example, positive/negative sentiment analysis has 2 labels). If you're familiar with logits, this final vector contains the logits. In the `transformers` source code, you can see this linear layer (assigned to `self.classifier`) initialized in the [constructor](https://huggingface.co/transformers/_modules/transformers/models/bert/modeling_bert.html#BertForSequenceClassification) for `BertForSequenceClassification`: ``` class BertForSequenceClassification(BertPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() ``` Since `self.classifier` is not part of the pre-trained BERT model, its parameters must be initialized randomly (done automatically by the `nn.Linear` constructor). @s4sarath Anytime you use code like `model = BertForSequenceClassification.from_pretrained("bert-base-cased")`, the `self.classifier` linear layer will have to be initialized randomly. @TingNLP You are getting different predictions each time because each time you instantiate the model using `.from_pretrained()`, the `self.classifier` parameters will be different.<|||||>Absolutely agree . 
Task specific heads has to be randomly initialised. Because, it is not a part of official Bert Model. I agree with that. On Tue, Apr 13, 2021, 5:49 PM Raguvir Kunani ***@***.***> wrote: > All of the BertForXXX models consist of a BERT model > <https://huggingface.co/transformers/model_doc/bert.html#bertmodel> > followed by some head which is task-specific. For sequence classification > tasks, the head is just a linear layer which maps the BERT transformer > hidden state vector to a vector of length num_labels, where num_labels is > the number of classes for your classification task (for example, > positive/negative sentiment analysis has 2 labels). If you're familiar with > logits, this final vector contains the logits. > > In the transformers source code, you can see this linear layer (assigned > to self.classifier) initialized in the constructor > <https://huggingface.co/transformers/_modules/transformers/models/bert/modeling_bert.html#BertForSequenceClassification> > for BertForSequenceClassification: > > class BertForSequenceClassification(BertPreTrainedModel): > def __init__(self, config): > super().__init__(config) > self.num_labels = config.num_labels > > self.bert = BertModel(config) > self.dropout = nn.Dropout(config.hidden_dropout_prob) > self.classifier = nn.Linear(config.hidden_size, config.num_labels) > > self.init_weights() > > Since self.classifier is not part of the pre-trained BERT model, its > parameters must be initialized randomly (done automatically by the > nn.Linear constructor). > > @s4sarath <https://github.com/s4sarath> Anytime you use code like model = > BertForSequenceClassification.from_pretrained("bert-base-cased"), the > self.classifier linear layer will have to be initialized randomly. > > @TingNLP <https://github.com/TingNLP> You are getting different > predictions each time because each time you instantiate the model using > .from_pretrained(), the self.classifier parameters will be different. > > β€” > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/5421#issuecomment-818690286>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ACRE6KH7VKR7VKJFIDZC33LTIQZDXANCNFSM4OM5S2SQ> > . > <|||||>OK... So... the problem is the parameters. Is it possible for us to fix the value? I think if it can be fixed, the prediction will not be inconsistent every time.<|||||>there is no point doing that right. because once the model is trained we will be having fixed set of parameters . :) On Wed, Apr 14, 2021, 10:45 AM TingNLP ***@***.***> wrote: > OK... So... the problem is the parameters. > Is it possible for us to fix the value? > > I think if it can be fixed, the prediction will not be inconsistent every > time. > > β€” > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/5421#issuecomment-819233571>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ACRE6KGVO6PVJZVFHV2TOBTTIUQHNANCNFSM4OM5S2SQ> > . > <|||||>@s4sarath Thanks for your immediate reply I am still a little confused. If the prediction is different each time, is that still a reasonable result??<|||||>I will explain bro. Assume classification. Last classification layer is initialised randomly right now. Now, it's okay, because you haven't trained it yet. But once you train the model and save the checkpoint, at the time of inference you are loading that checkpoint. 
So the prediction remains consistent. On Wed, Apr 14, 2021, 12:11 PM TingNLP ***@***.***> wrote: > @s4sarath <https://github.com/s4sarath> Thanks for your immediate reply > I am still a little confused. > If the prediction is different each time, is that still a reasonable > result?? > > β€” > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/5421#issuecomment-819270472>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ACRE6KESSFYPPKW2A5AYEG3TIU2JPANCNFSM4OM5S2SQ> > . > <|||||>It is said that BERT is a pre-trained model. Why then, it is needed to be trained again?<|||||>It does not need to be trained again to be used for a task that it was trained on: e.g., masked language modeling over a very large, general corpus of books and web text in the case of BERT. However, to perform more specific tasks like classification and question answering, such a model must be re-trained, which is called _fine-tuning_. Since many popular tasks fall in this latter category, it is assumed that most developers will be fine-tuning the models, and hence the developers of Huggingface included this warning message to ensure developers are aware when the model does not appear to have been fine-tuned. See **Advantages of Fine-Tuning** at this tutorial: https://mccormickml.com/2019/07/22/BERT-fine-tuning/#12-installing-the-hugging-face-library Or check out this page from the documentation: https://huggingface.co/transformers/training.html<|||||>Thank you. Now it is a bit more clear. I am using finBERT for sentiment analysis, and downloaded the model from the official finBERT GIT. Do I need, then, to train the model anew?<|||||>I am facing a similar error while creating an entity extraction model using bert-base-uncased. 
Here is the code for my model ``` import config import torch import transformers import torch.nn as nn def loss_fn(output, target, mask, num_labels): lfn = nn.CrossEntropyLoss() active_loss = mask.view(-1) == 1 active_logits = output.view(-1, num_labels) active_labels = torch.where( active_loss, target.view(-1), torch.tensor(lfn.ignore_index).type_as(target) ) loss = lfn(active_logits, active_labels) return loss class EntityModel(nn.Module): def __init__(self, num_tag, num_pos): super(EntityModel, self).__init__() self.num_tag = num_tag self.num_pos = num_pos self.bert = transformers.BertModel.from_pretrained(config.BASE_MODEL_PATH) self.bert_drop_1 = nn.Dropout(p = 0.3) self.bert_drop_2 = nn.Dropout(p = 0.3) self.out_tag = nn.Linear(768, self.num_tag) self.out_pos = nn.Linear(768, self.num_pos) def forward(self, ids, mask, token_type_ids, target_pos, target_tag): o1, _ = self.bert(ids, attention_mask = mask, token_type_ids = token_type_ids) bo_tag = self.bert_drop_1(o1) bo_pos = self.bert_drop_2(o1) tag = self.out_tag(bo_tag) pos = self.out_pos(bo_pos) loss_tag = loss_fn(tag, target_tag, mask, self.num_tag) loss_pos = loss_fn(pos, target_pos, mask, self.num_pos) loss = (loss_tag + loss_pos) / 2 return tag, pos, loss ``` **Error** Some weights of the model checkpoint at D:\Transformers\bert-entity-extraction\input\bert-base-uncased_L-12_H-768_A-12 were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias'] - This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). How to reslove this? <|||||>> @veronica320, the pooler layer is not used when doing sequence classification, so there's nothing to be worried about. Note that this warning is sensitive to a Transformers version used for model training vs. a version used for inference. For instance, the Roberta model finetuned with 4.9.1 expresses this warning when loading the model for `RobertaForSequenceClassification` inference based on ver. 4.15.0, but the model finetuned with 4.15.0 does not. <|||||>An interesting edge case -- when I created and fine-tuned my custom classification model ```BertXXXSequenceClassification``` inherited from ```BertPreTrainedModel```, I found out that I can't name layers called ```self.beta_layer```. Otherwise, I get the warning that says beta_layer is newly initialised and won't be able to load its wights and bias from saved checkpoints. Didn't know what caused this conflict, and refactoring it to ```self.bate_layer``` saved me in the end. 
I used ver 4.15.0.<|||||>I've been using suppressing the warning with this helper: ```python from transformers import CLIPTextModel, logging class log_level: orig_log_level: int log_level: int def __init__(self, log_level: int): self.log_level = log_level self.orig_log_level = logging.get_verbosity() def __enter__(self): logging.set_verbosity(self.log_level) def __exit__(self): logging.set_verbosity(self.orig_log_level) with log_level(logging.ERROR): text_encoder: CLIPTextModel = CLIPTextModel.from_pretrained('openai/clip-vit-large-patch14') ```<|||||>Coming here from Google, this was happening when I called `AutoModel.from_pretrained("EleutherAI/gpt-neo-125M")`. I figured out that you can get the correct model type using the pipeline API instead: ![image](https://user-images.githubusercontent.com/3464445/208471014-625135b3-2cc4-4fa7-8309-e2479664eb8d.png) In this case, this means I could also use `AutoModelForCausalLM`, but not `AutoModel` as that generated a model of a different type.<|||||>For those who want to suppress the warning for the latest transformers version, try this, hope this helps :D ``` import logging logging.getLogger("transformers.modeling_utils").setLevel(logging.ERROR) ```<|||||>I guess the simple solution is to use `AutoModelForMaskedLM` instead of `AutoModel`. ```python from transformers import AutoModelForMaskedLM model = AutoModelForMaskedLM.from_pretrained('deps/distilbert-base-uncased') ```
transformers
5,420
closed
Refactor generation sampling parameters (e.g. top k, temperature) into "Sampling" classes
#4164 has a full description of the intention here. Basically, to avoid exploding generate(...) with more arguments, I've added one generic Sampler parameter that allows for arbitrary transformations of the generation probability distribution conditioned on the past. This allows users to specify custom ways of sampling (e.g. insert a specific token after a previous one, etc.) In the process, I've added some basic tests around these samplers; existing tests pass otherwise.
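To illustrate the kind of "Sampling" class this refactor enables (the class and method names below are illustrative only, not necessarily the ones used in this PR), a distribution-warping object could look roughly like this:

```python
import torch


class TopKSampler:
    """Illustrative distribution warper: keep only the k highest-scoring tokens."""

    def __init__(self, k: int):
        self.k = k

    def warp(self, scores: torch.Tensor) -> torch.Tensor:
        # scores: (batch_size, vocab_size) next-token logits
        kth_best = torch.topk(scores, self.k, dim=-1).values[..., -1, None]
        return scores.masked_fill(scores < kth_best, float("-inf"))
```

Inside the generation loop, each enabled sampler would then be applied once per step to the next-token scores before sampling, which is what keeps the compute cost unchanged relative to the hard-coded top-k/temperature logic.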
06-30-2020 23:45:56
06-30-2020 23:45:56
@sshleifer thanks for taking a look. The run against the tests you mentioned (bart/t5/marian) passed when I gave them a kick. When you say performance, this approach should have the same amount of compute (each enabled Sampler runs once per generation loop) since it is just moving code around unless I missed something. Let me do a rebase and see if that CI failure goes away -- let me know if you have any other concerns! <|||||>@turtlesoupy - thanks a lot for the PR! Cool design choice! The `generate` method definitely needs a bigger refactor sooner or later and this is a cool idea on how to make it easier to add new probability distribution wrap functions. With this design I'm a bit worried that we restrict beam search too much in a sense that only the log_softmax of the "next_tokens" distribution can "wrapped" but not the summed distribution of the `next_token_scorers + beam_scores`. Here this will break the beam search + sampling case (if I understood the code correctly). I guess a method that adapts the `_beam_scores + next_token_scores` could also be used in "greedy" beam search in the future and this design choice would block us a bit. But I'm not sure whether there are many use cases one would like to adapt `_beam_scores + next_token_scores` before appling `top_k` for "greedy" beam search...what are your thoughts on this? @turtlesoupy @yjernite @sshleifer <|||||>@patrickvonplaten I'm un-opinionated since my use cases weren't using beam search; the goal of this PR was so that I could introduce a my own sampler that enforced rules without having to fork the generate function. For beam search, one approach could be to apply the warp to (`next_token_scores + beam_scores`) and then perform sampling afterwards. Then it is sampling from a consistent space and the hypothesis scores would be modified appropriately <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,419
closed
High Quality EN-DE/EN-FR Translators
Download instructions from torch hub/fairseq: [here](https://github.com/pytorch/fairseq/blob/f03392d11faf1588cb571d19835d6a61ab0d9ca6/examples/wmt19/README.md#L1). The BART conversion script should be reusable.

## Open source status

* [x] the model implementation is available: (give details)
* [x] the model weights are available: (give details)
* [x] who are the authors: (mention them, if possible by @gh-username) Sergey Edunov, @myleott Michael Auli, David Grangier

Paper: https://arxiv.org/pdf/1808.09381.pdf

### Spec

Desired API:

```python
mname = 'facebook/wmt-en-de'
model = FairseqTranslator.from_pretrained(mname)
tokenizer = FairseqBPETokenizer.from_pretrained(mname)  # AutoTokenizer should also work
batch = tokenizer.prepare_seq2seq_batch(['Maschinelles Lernen ist großartig!'])
translated = model.generate(**batch)  # determine
assert tokenizer.batch_decode(translated)[0] == 'Machine Learning is great'
```

- add .rst docs (see the adding-a-new-model instructions, but don't follow them too religiously if something seems suboptimal).
- check timing, memory vs fairseq.
- if lots of modeling code is added, common tests should pass.

### Steps

1. Get tokenizer equivalence (the fairseq object should have an `encode` method, and there should be wget-able links on the fairseq side to get the relevant tokenizer files).
   1b. Upload the tokenizer to S3 so your tokenizer tests work on CI. You can work out of the `stas/fairseq-en-de` namespace on your modelhub account and then move everything over (or not) at the end.
2. Get `model.forward`/"logits" equivalence (ignore differences less than 1e-6). This usually doesn't work the first time and you have to go line by line with two ipdb sessions (one fairseq, one hf) until you can find the line that's different. At this stage you should worry very little about code quality and just try to get integration tests passing. (A rough sketch of this check follows the spec below.)
3. Get `model.generate`/"translation" equivalence. There may be small beam search discrepancies. For this you will need to figure out `decoder_start_token_id`, `num_beams`, and other config settings.
4. Upload everything to S3.
5. Go through the [template](https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/README.md#typical-workflow-for-including-a-model) and make sure most of the reasonable things are done. At this point a full integration test (as above) should pass.
6. Check memory, time and BLEU against fairseq (ideally in Colab). Improve/document results in the PR description.
7. Test the scary parts: special tokens, padding insensitivity.
8. Docs/AutoConfig etc.

Helpful: https://huggingface.co/transformers/model_sharing.html

Assigned to: @stas00
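For step 2, the kind of logits-equivalence check meant here might look roughly like the following. This is only a sketch: `fairseq_model` and `hf_model` are hypothetical handles for the torch.hub model and the in-progress transformers port, and the token ids are made up.

```python
import torch

# hypothetical: both models are fed the same token ids in eval mode
tokens = torch.tensor([[419, 22, 371, 2]])  # placeholder ids for illustration

with torch.no_grad():
    fairseq_logits = fairseq_model(tokens)[0]
    hf_logits = hf_model(tokens)[0]

# differences below 1e-6 are ignored, per the spec
assert torch.allclose(fairseq_logits, hf_logits, atol=1e-6)
```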
06-30-2020 23:41:39
06-30-2020 23:41:39
Excuse me. Will this model be added in the future, how long will it take? Is currently only T5 and Bart can do machine translation?<|||||>I would guess that I get around to this by the end of July, but I can't be sure. We also have `MarianMTModel` and 1000+ pretrained weights from `Helsinki-NLP/` that do translation. Here is the list: https://huggingface.co/Helsinki-NLP <|||||>I will work on this one. <|||||>Here is a lazy man's implementation that uses a simple proxy to the fairseq implementation and makes the spec test pass: ``` import torch class FairseqProxy(): def __init__(self, module): self.module = module @classmethod def from_pretrained(cls, mname): return cls(module=torch.hub.load('pytorch/fairseq', mname, checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', tokenizer='moses', bpe='fastbpe')) class FairseqTranslator(FairseqProxy): def generate(self, **tokenized_sentences): return self.module.generate(tokenized_sentences['data']) class FairseqBPETokenizer(FairseqProxy): def prepare_seq2seq_batch(self, sentences): # encode return {'data': [self.module.encode(sentence) for sentence in sentences]} def batch_decode(self, batched_hypos): return [self.module.decode(hypos[0]['tokens']) for hypos in batched_hypos] ``` ``` # Look ma, I cheated and the test passes ;) mname = 'transformer.wmt19.ru-en' model = FairseqTranslator.from_pretrained(mname) tokenizer = FairseqBPETokenizer.from_pretrained(mname) batch = tokenizer.prepare_seq2seq_batch(["МашинноС ΠΎΠ±ΡƒΡ‡Π΅Π½ΠΈΠ΅ - это Π·Π΄ΠΎΡ€ΠΎΠ²ΠΎ!"]) translated = model.generate(**batch) assert tokenizer.batch_decode(translated)[0] == 'Machine learning is great!' ``` Now to the real work of porting...<|||||>mostly done: https://github.com/huggingface/transformers/pull/6940<|||||>once https://github.com/huggingface/transformers/pull/6940 is merged this issue is to be closed<|||||>FYI, Linked Pull requests automatically close the linked issue.<|||||>I noticed that you already did the linking after leaving the comment, but decided to leave it as the previous comment of mine wasn't certain ;)
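As an aside to the Marian pointer earlier in this thread, a minimal translation sketch with one of those pretrained checkpoints (the checkpoint choice below is just an example; pick any Helsinki-NLP pair you need):

```python
from transformers import MarianMTModel, MarianTokenizer

mname = "Helsinki-NLP/opus-mt-en-de"  # one of the 1000+ pretrained Marian checkpoints
tokenizer = MarianTokenizer.from_pretrained(mname)
model = MarianMTModel.from_pretrained(mname)

batch = tokenizer(["Machine learning is great!"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```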
transformers
5,418
closed
Bans SentencePiece 0.1.92
SentencePiece 0.1.92 seems to cause a segmentation fault, as reported [here](https://github.com/huggingface/transformers/issues/4857).
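Purely as an illustration (not the literal diff of this PR), an exclusion pin like the one described behaves as follows under PEP 440 version specifiers:

```python
# A minimal sketch, assuming the fix is a "!= 0.1.92" style requirement pin.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("!=0.1.92")
assert "0.1.91" in spec       # older releases are still allowed
assert "0.1.92" not in spec   # the segfaulting release is excluded
```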
06-30-2020 22:52:37
06-30-2020 22:52:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=h1) Report > Merging [#5418](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.17%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5418/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5418 +/- ## ========================================== + Coverage 77.69% 77.87% +0.17% ========================================== Files 140 140 Lines 24334 24334 ========================================== + Hits 18906 18949 +43 + Misses 5428 5385 -43 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (ΓΈ)` | | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+8.92%)` | :arrow_up: | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=footer). Last update [87716a6...5aa01fe](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,417
closed
Clean up diffs in Trainer/TFTrainer
This PR does a bit of cleanup in the two Trainers and tries to make the diff between the two TrainingArguments as minimal as possible.
- `set_seed` is now just one function in trainer_utils: the problem was that even if you only use TF and import it from transformers right now, it does not set the seed for TF **and** will fail on PyTorch stuff.
- `eval_steps` is now a common argument for both versions of Trainer
- as discussed, `n_gpu` in `TFTrainingArguments` becomes `n_replicas`. This is a breaking change; I can add the deprecation warnings that go with it if you think it's necessary.
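A minimal sketch of what such a backend-agnostic `set_seed` could look like (not the exact code merged here; the real version relies on the library's own availability helpers rather than try/except imports):

```python
import random

import numpy as np


def set_seed(seed: int) -> None:
    # Seed every backend that happens to be installed, without requiring the other one.
    random.seed(seed)
    np.random.seed(seed)
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    except ImportError:
        pass
    try:
        import tensorflow as tf
        tf.random.set_seed(seed)
    except ImportError:
        pass
```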
06-30-2020 22:50:29
06-30-2020 22:50:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=h1) Report > Merging [#5417](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64e3d966b1131c15b5905b1e1e582d4bebac1ef0&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `65.11%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5417/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5417 +/- ## ======================================= Coverage 77.75% 77.75% ======================================= Files 140 140 Lines 24373 24392 +19 ======================================= + Hits 18951 18967 +16 - Misses 5422 5425 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <33.33%> (-0.45%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <40.00%> (-0.85%)` | :arrow_down: | | [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <44.44%> (-3.71%)` | :arrow_down: | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <100.00%> (ΓΈ)` | | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <100.00%> (+7.45%)` | :arrow_up: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <100.00%> (+0.46%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (+0.50%)` | :arrow_up: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=footer). Last update [64e3d96...c185e2f](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Pretty cool πŸ”₯
transformers
5,416
closed
Refactor generation sampling parameters (e.g. top k, temperature) into "Sampling" classes
#4164 has a full description of the intention here. Basically, to avoid exploding `generate(...)` with more arguments, I've added one generic `Sampler` parameter that allows for arbitrary transformations of the generation probability distribution conditioned on the past. This allows users to specify custom ways of sampling (e.g. inserting a specific token after a previous one). In the process, I've added some basic tests around these samplers; existing tests pass otherwise.
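A rough sketch of the idea (not the exact API in this PR): a sampler is just a callable that rewrites the next-token logits and may look at what has already been generated.

```python
import torch


class TemperatureSampler:
    """Illustrative sampler: rescales logits before the next token is drawn."""

    def __init__(self, temperature: float = 1.0):
        self.temperature = temperature

    def __call__(self, logits: torch.Tensor, generated_ids: torch.Tensor) -> torch.Tensor:
        # generated_ids is available for history-dependent transforms,
        # e.g. forcing or banning a token right after another one.
        return logits / self.temperature
```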
06-30-2020 22:30:10
06-30-2020 22:30:10
(Replaced merge with rebase -- see #5420)
transformers
5,415
closed
Gradient checkpointing BERT & ALBERT poc
Proof of concept for gradient checkpointing in PyTorch, using a model-agnostic approach. The POC is done for BERT and ALBERT.

Pros:
- Model agnostic, only a few lines to add to models to be able to use this functionality
- Reinforces the model layer API, adding `get_layers()` (name to be discussed) alongside `get_input_embeddings()` and `get_output_embeddings()`

Cons:
- The checkpoint API only handles positional arguments, and only PyTorch tensors or None. This means that:
  - The `output_hidden_states` must be cast to a tensor in the model
  - Models that pass keyword arguments to their layers need to pass positional arguments (see GPT-2 for example, which uses keyword arguments [here](https://github.com/huggingface/transformers/blob/b45e65efa0fbff2611ddd68e14fa75cacef3fe08/src/transformers/modeling_gpt2.py#L488-L493)).

If you think this is a cool API, I'll go ahead and implement this for the remaining models. @patrickvonplaten @thomwolf @julien-c @sgugger @ibeltagy

Here are the results using the benchmarking script:

```py
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig

args = PyTorchBenchmarkArguments(models=["bert-base-cased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512], no_inference=True, training=True)
config_base = BertConfig.from_pretrained("bert-base-cased", gradient_checkpointing=False)
benchmark = PyTorchBenchmark(args, configs=[config_base])
benchmark.run()
```

Result (only relevant info):

```
==================== TRAIN - SPEED - RESULTS ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Time in s
--------------------------------------------------------------------------------
bert-base-cased        8              8              0.028
bert-base-cased        8              32             0.029
bert-base-cased        8              128            0.072
bert-base-cased        8              512            0.296
--------------------------------------------------------------------------------
==================== TRAIN - MEMORY - RESULTS ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Memory in MB
--------------------------------------------------------------------------------
bert-base-cased        8              8              2419
bert-base-cased        8              32             2481
bert-base-cased        8              128            2985
bert-base-cased        8              512            8233
--------------------------------------------------------------------------------
```

```py
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig

args = PyTorchBenchmarkArguments(models=["bert-base-cased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512], no_inference=True, training=True)
config_base = BertConfig.from_pretrained("bert-base-cased", gradient_checkpointing=True)
benchmark = PyTorchBenchmark(args, configs=[config_base])
benchmark.run()
```

Result (only relevant info):

```
==================== TRAIN - SPEED - RESULTS ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Time in s
--------------------------------------------------------------------------------
bert-base-cased        8              8              0.049
bert-base-cased        8              32             0.05
bert-base-cased        8              128            0.109
bert-base-cased        8              512            0.473
--------------------------------------------------------------------------------
==================== TRAIN - MEMORY - RESULTS ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Memory in MB
--------------------------------------------------------------------------------
bert-base-cased        8              8              2385
bert-base-cased        8              32             2403
bert-base-cased        8              128            2465
bert-base-cased        8              512            3969
--------------------------------------------------------------------------------
```
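For readers unfamiliar with the mechanism, here is a minimal sketch of what running layers through `torch.utils.checkpoint` looks like (assuming, for simplicity, that each layer takes and returns a single hidden-states tensor — real BERT layers return tuples, which is part of the plumbing this POC handles):

```python
import torch
from torch.utils.checkpoint import checkpoint


def forward_layers(layers, hidden_states, attention_mask, use_checkpointing=True):
    for layer in layers:
        if use_checkpointing:
            # Activations inside `layer` are not stored; they are recomputed during
            # backward, trading compute time for memory (as the benchmarks above show).
            hidden_states = checkpoint(layer, hidden_states, attention_mask)
        else:
            hidden_states = layer(hidden_states, attention_mask)
    return hidden_states
```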
06-30-2020 21:43:06
06-30-2020 21:43:06
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=h1) Report > Merging [#5415](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5415/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5415 +/- ## ========================================== + Coverage 77.69% 77.71% +0.01% ========================================== Files 140 140 Lines 24334 24343 +9 ========================================== + Hits 18906 18917 +11 + Misses 5428 5426 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.71% <100.00%> (-0.69%)` | :arrow_down: | | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.86% <100.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.46% <100.00%> (+0.80%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.12% <100.00%> (+0.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.68% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=footer). 
Last update [87716a6...b7e417a](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I really like the API, I think it's fine if we enforce all attention layers to use positional arguments and wrap the output attentions bool into a tensor. Can we test how much memory is saved here for `bert-base-uncased` layers 6 - 18 for example? Should be quite easy to do now with the benchmark utils.<|||||>Benchmarked with the script and updated the PR @patrickvonplaten!<|||||>Awesome, it looks we can gain quite a lot of memory :-)<|||||>@LysandreJik , another problem is that `torch.utils.checkpoint.checkpoint` expects the function to return a tuple of `Variable`s. This won't work with forward functions that return other types as in [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L321).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@LysandreJik , any plans to resurrect this? <|||||>Yes it's on my TODO (probably in ~2 weeks), and will be for most models (with some exceptions like BART and T5, which need a lot of plumbing to work with this POC)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,414
closed
Fix roberta model ordering for TFAutoModel
Given that `RobertaConfig` inherits from `BertConfig`, the previous ordering was causing BERT models to be wrongly selected by `TFAutoModel...` in place of RoBERTa ones when instantiated with RoBERTa models (checked the other configs too; it seems it was the only one with such a problem).
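A toy illustration of why the ordering matters, with stand-in classes rather than the real configs (the PR description implies the auto mapping is scanned in order and matched by `isinstance`):

```python
class FakeBertConfig:
    pass


class FakeRobertaConfig(FakeBertConfig):
    pass


def first_match(config, mapping):
    # Return the model name attached to the first config class the instance matches.
    return next(name for cls, name in mapping if isinstance(config, cls))


buggy_order = [(FakeBertConfig, "TFBertModel"), (FakeRobertaConfig, "TFRobertaModel")]
fixed_order = [(FakeRobertaConfig, "TFRobertaModel"), (FakeBertConfig, "TFBertModel")]

config = FakeRobertaConfig()
assert first_match(config, buggy_order) == "TFBertModel"     # parent class shadows the subclass
assert first_match(config, fixed_order) == "TFRobertaModel"  # subclass listed first wins
```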
06-30-2020 21:36:21
06-30-2020 21:36:21
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=h1) Report > Merging [#5414](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b45e65efa0fbff2611ddd68e14fa75cacef3fe08&el=desc) will **decrease** coverage by `0.59%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5414/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5414 +/- ## ========================================== - Coverage 78.27% 77.67% -0.60% ========================================== Files 140 140 Lines 24334 24334 ========================================== - Hits 19047 18902 -145 - Misses 5287 5432 +145 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `72.50% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (ΓΈ)` | | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=footer). Last update [b45e65e...c3229c4](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>If the order is now consistent with `modeling_auto.py`, LGTM<|||||>@julien-c What do you mean by consistent exactly? Exact same ordering or same final behavior? (yes for the latter, no for the former for now).<|||||>@Pierrci both
transformers
5,413
closed
[mobilebert] Avoid F.tanh deprecation warning
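A minimal sketch of the substitution the title refers to (the actual diff is not reproduced here): `torch.nn.functional.tanh` is deprecated in recent PyTorch releases in favor of `torch.tanh`.

```python
import torch

x = torch.randn(2, 3)
# old, warns on recent PyTorch: torch.nn.functional.tanh(x)
y = torch.tanh(x)  # drop-in replacement with identical results
```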
06-30-2020 20:35:33
06-30-2020 20:35:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=h1) Report > Merging [#5413](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac611145926ff63ee6d6cbd0b28c19bacb6f7ea1&el=desc) will **increase** coverage by `0.44%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5413/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5413 +/- ## ========================================== + Coverage 77.42% 77.87% +0.44% ========================================== Files 140 140 Lines 24334 24334 ========================================== + Hits 18841 18949 +108 + Misses 5493 5385 -108 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `88.90% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=footer). Last update [ac61114...0845ecd](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,412
closed
[GH Runner] fix yaml indent
06-30-2020 20:16:47
06-30-2020 20:16:47
transformers
5,411
closed
Add TFBartForConditionalGeneration
- adds `TFBartForConditionalGeneration`, which can generate summaries that are equivalent to the PyTorch model's.

#### TODO this PR:
- [x] fast tests besides two
- [x] reasonable xsum generations
- [x] tests passing
- [x] fix slow cnn test (tf needs to call `adjust_logits_during_generation`)
- [x] functional dropout
- [x] simplify torch and tf caching logic
- [x] docs
- [x] upload applicable tf/h5 weights.

#### Future PRs:
- [ ] blender/pegasus/mBART/marian etc.
- [ ] #7814
06-30-2020 19:52:10
06-30-2020 19:52:10
Awesome! This PR will leverage many pretrained weights and make them available for TF! I don't really think there is a workaround for [supporting multiple input types](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_t5.py#L946), especially to make it compatible with Keras at the moment. There was a discussion on Slack about it (also cc @jplu ). Also, did you check that the model works in tf graph mode (corresponds to this test: https://github.com/huggingface/transformers/blob/316206c11466c9a4019a376843581bf519422369/tests/test_modeling_tf_common.py#L128, which is about to be added in another PR)?<|||||>Sounds like I should wait until you start / for the other changes before working more on this, @jplu? Would be really good IMO if whatever XLA magic we use decides whether the functions should take tuples or dicts. I much prefer either to both. <|||||>IMHO yes, and this will give you more time to polish your code :) That said, this is only my opinion; if everybody else prefers to have it merged I will not go against it ^^ but I think that for now the more models we add, the more issues we add, and then the longer and harder it will be to fix everything. I'm in favor of using only positional arguments and dicts, but this should be discussed with everybody, to see what they think about it.<|||||>Is this still blocked @jplu ?<|||||>Try to rebase + make the changes to pass the tests. And it should be ok :)<|||||>Thanks for the review @LysandreJik ! + mBART, Pegasus, Blenderbot, and Marian will be in the next PR (this one is already too big for me to hold in my tiny brain). + Your 4 bullets: Will do!
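For context, "works in tf graph mode" boils down to the forward pass still tracing and executing once wrapped in `tf.function`; a toy illustration with a stand-in Keras model (not TFBart) could look like this:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4)])


@tf.function
def forward(inputs):
    # Traced into a graph on the first call instead of running eagerly.
    return model(inputs)


print(forward(tf.ones((2, 8))).shape)  # (2, 4)
```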
transformers
5,410
closed
[cleanup] TF T5 tests only init t5-base once.
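A minimal sketch of the pattern the title describes, assuming a `unittest`-style test class (the loader below is a placeholder, not the actual `t5-base` loading call):

```python
import unittest


def expensive_load():
    # Placeholder standing in for loading t5-base once; the real call is much slower.
    return object()


class TFT5IntegrationTestSketch(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Built once for the whole class rather than once per test method.
        cls.model = expensive_load()

    def test_model_is_shared(self):
        self.assertIsNotNone(self.model)
```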
06-30-2020 19:50:41
06-30-2020 19:50:41
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=h1) Report > Merging [#5410](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/991172922f9711d7bef160d6aedb2ed1059a88ff&el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5410/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5410 +/- ## ========================================== - Coverage 77.89% 77.87% -0.03% ========================================== Files 141 140 -1 Lines 24634 24334 -300 ========================================== - Hits 19189 18949 -240 + Misses 5445 5385 -60 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.43% <100.00%> (ΓΈ)` | | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `78.26% <0.00%> (-7.46%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (-4.11%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (-1.37%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.84% <0.00%> (-0.71%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.31% <0.00%> (-0.69%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (-0.39%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.74% <0.00%> (-0.28%)` | :arrow_down: | | ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=footer). Last update [9911729...e4ce37c](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>CI is broken for other reasons.
transformers
5,409
closed
[CI] gh runner doesn't use -v, cats new result
This should reduce the amount of scrolling required to find errors.
06-30-2020 19:41:46
06-30-2020 19:41:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=h1) Report > Merging [#5409](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/27a7fe7a8d3e58d1df7ecc4c5390ac7be728724f&el=desc) will **increase** coverage by `0.23%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5409/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5409 +/- ## ========================================== + Coverage 77.63% 77.87% +0.23% ========================================== Files 140 140 Lines 24334 24334 ========================================== + Hits 18892 18949 +57 + Misses 5442 5385 -57 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.93% <0.00%> (+0.19%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (+2.01%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=footer). Last update [27a7fe7...dee5b20](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>merging, will fix if it breaks.
transformers
5,408
closed
Fix examples titles and optimization doc page
This PR addresses two things:
- first, some of the titles were a bit messy in the navigation bar on the examples and optimization pages; fixed that
- second, it expands the optimization documentation, adding mentions of which classes/functions go with which backend (since there is no TF prefix) and expanding existing docstrings or adding them if missing.
06-30-2020 19:37:37
06-30-2020 19:37:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=h1) Report > Merging [#5408](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.21%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5408/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5408 +/- ## ========================================== + Coverage 77.69% 77.90% +0.21% ========================================== Files 140 140 Lines 24334 24336 +2 ========================================== + Hits 18906 18960 +54 + Misses 5428 5376 -52 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `96.05% <100.00%> (+0.05%)` | :arrow_up: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.65% <100.00%> (+0.38%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=footer). Last update [87716a6...b839c40](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,407
closed
examples/seq2seq: never override $WANDB_PROJECT
cc @borisdayma
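A one-line sketch of the behaviour the title asks for (the project name below is illustrative only): set a default W&B project without clobbering one the user already exported.

```python
import os

# setdefault leaves any user-provided $WANDB_PROJECT untouched.
os.environ.setdefault("WANDB_PROJECT", "my-seq2seq-runs")
```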
06-30-2020 19:11:30
06-30-2020 19:11:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=h1) Report > Merging [#5407](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c4d4e8bdbd25d9463d41de6398940329c89b7fb6&el=desc) will **decrease** coverage by `0.33%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5407/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5407 +/- ## ========================================== - Coverage 77.90% 77.57% -0.34% ========================================== Files 140 140 Lines 24334 24334 ========================================== - Hits 18957 18876 -81 - Misses 5377 5458 +81 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-17.81%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `84.79% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.01% <0.00%> (-5.11%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `87.67% <0.00%> (-2.29%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.02% <0.00%> (-2.18%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.30% <0.00%> (-1.54%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.67% <0.00%> (-0.51%)` | :arrow_down: | | ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=footer). Last update [c4d4e8b...6c8eb90](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
5,406
closed
[fix] slow fill_mask test failure
- The new tokenizer API does not put a space between `<s/>` and the sentence.
- New result: "my name is John" is better than the old result: "My name is", so it's fine to update `expected_result`.
- This is caused by the tokenizers upgrade.
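For reference, the kind of slow test being updated exercises the fill-mask pipeline; a hedged sketch (model name and sentence are illustrative, not the literal test) looks like:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilroberta-base")
# Returns a list of candidate completions as dicts (sequence, score, token, ...).
print(fill_mask("My name is <mask>.")[0])
```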
06-30-2020 18:57:53
06-30-2020 18:57:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=h1) Report > Merging [#5406](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c4d4e8bdbd25d9463d41de6398940329c89b7fb6&el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5406/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #5406 +/- ## ========================================== - Coverage 77.90% 77.87% -0.03% ========================================== Files 140 140 Lines 24334 24334 ========================================== - Hits 18957 18950 -7 - Misses 5377 5384 +7 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=tree) | Coverage Ξ” | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.31% <ΓΈ> (ΓΈ)` | | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=footer). Last update [c4d4e8b...bd7b994](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>lgtm
transformers
5,405
closed
Colab session crash with XLA & Transformers
I am trying to use XLA with transformers, but as soon as I import transformers after installing XLA, the session restarts. I even tried an older version of transformers and got the same issue. Is it related to Colab?
```
!pip3 install transformers

VERSION = "nightly"  #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION

from transformers import T5Tokenizer
```
06-30-2020 18:09:11
06-30-2020 18:09:11
Hi! Could you share a colab notebook reproducing the error?<|||||>> Hi! Could you share a colab notebook reproducing the error? Below code was sufficient to reproduce the error: ``` !pip3 install transformers VERSION = "nightly" #@param ["1.5" , "20200325", "nightly"] !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version $VERSION from transformers import T5Tokenizer ``` **_But now i am not able install XLA itself._** I can check they have made some changes in env-setup.py file yesterday. **Output when i installed XLA yesterday :** ``` % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 4139 100 4139 0 0 36628 0 --:--:-- --:--:-- --:--:-- 36628 Updating TPU and VM. This may take around 2 minutes. Updating TPU runtime to pytorch-nightly ... Collecting cloud-tpu-client Downloading https://files.pythonhosted.org/packages/56/9f/7b1958c2886db06feb5de5b2c191096f9e619914b6c31fdf93999fdbbd8b/cloud_tpu_client-0.10-py3-none-any.whl Collecting google-api-python-client==1.8.0 Downloading https://files.pythonhosted.org/packages/9a/b4/a955f393b838bc47cbb6ae4643b9d0f90333d3b4db4dc1e819f36aad18cc/google_api_python_client-1.8.0-py3-none-any.whl (57kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 61kB 3.1MB/s Requirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from cloud-tpu-client) (4.1.3) Requirement already satisfied: httplib2<1dev,>=0.9.2 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (0.17.4) Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (3.0.1) Requirement already satisfied: google-auth>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.17.2) Requirement already satisfied: google-api-core<2dev,>=1.13.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.16.0) Requirement already satisfied: six<2dev,>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.12.0) Uninstalling torch-1.5.1+cu101: Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (0.0.3) Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.4.8) Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.2.8) Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (4.6) Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client==1.8.0->cloud-tpu-client) (47.3.1) Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client==1.8.0->cloud-tpu-client) (4.1.0) Requirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2018.9) Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in 
/usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2.23.0) Requirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (3.10.0) Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (1.52.0) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2.9) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (3.0.4) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (1.24.3) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2020.6.20) Installing collected packages: google-api-python-client, cloud-tpu-client Found existing installation: google-api-python-client 1.7.12 Uninstalling google-api-python-client-1.7.12: Successfully uninstalled google-api-python-client-1.7.12 Successfully installed cloud-tpu-client-0.10 google-api-python-client-1.8.0 Done updating TPU runtime Successfully uninstalled torch-1.5.1+cu101 Uninstalling torchvision-0.6.1+cu101: Successfully uninstalled torchvision-0.6.1+cu101 Copying gs://tpu-pytorch/wheels/torch-nightly-cp36-cp36m-linux_x86_64.whl... - [1 files][107.3 MiB/107.3 MiB] Operation completed over 1 objects/107.3 MiB. Copying gs://tpu-pytorch/wheels/torch_xla-nightly-cp36-cp36m-linux_x86_64.whl... / [1 files][230.7 MiB/230.7 MiB] Operation completed over 1 objects/230.7 MiB. Copying gs://tpu-pytorch/wheels/torchvision-nightly-cp36-cp36m-linux_x86_64.whl... / [1 files][ 1.7 MiB/ 1.7 MiB] Operation completed over 1 objects/1.7 MiB. Processing ./torch-nightly-cp36-cp36m-linux_x86_64.whl Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch==nightly) (0.16.0) Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==nightly) (1.18.5) ERROR: fastai 1.0.61 requires torchvision, which is not installed. 
Installing collected packages: torch Successfully installed torch-1.7.0a0+b9cca4b Processing ./torch_xla-nightly-cp36-cp36m-linux_x86_64.whl Installing collected packages: torch-xla Successfully installed torch-xla-1.6+71579ee Processing ./torchvision-nightly-cp36-cp36m-linux_x86_64.whl Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torchvision==nightly) (1.18.5) Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (from torchvision==nightly) (1.7.0a0+b9cca4b) Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==nightly) (7.0.0) Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch->torchvision==nightly) (0.16.0) Installing collected packages: torchvision Successfully installed torchvision-0.8.0a0+446eac6 Reading package lists... Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: libnvidia-common-440 Use 'apt autoremove' to remove it. The following NEW packages will be installed: libomp5 0 upgraded, 1 newly installed, 0 to remove and 33 not upgraded. Need to get 234 kB of archives. After this operation, 774 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libomp5 amd64 5.0.1-1 [234 kB] Fetched 234 kB in 1s (373 kB/s) Selecting previously unselected package libomp5:amd64. (Reading database ... 144379 files and directories currently installed.) Preparing to unpack .../libomp5_5.0.1-1_amd64.deb ... Unpacking libomp5:amd64 (5.0.1-1) ... Setting up libomp5:amd64 (5.0.1-1) ... Processing triggers for libc-bin (2.27-3ubuntu1) ... /sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link ``` **Output now:** ``` % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 4139 100 4139 0 0 64671 0 --:--:-- --:--:-- --:--:-- 64671 Updating TPU and VM. This may take around 2 minutes. Updating TPU runtime to pytorch-nightly ... 
Collecting cloud-tpu-client Downloading https://files.pythonhosted.org/packages/56/9f/7b1958c2886db06feb5de5b2c191096f9e619914b6c31fdf93999fdbbd8b/cloud_tpu_client-0.10-py3-none-any.whl Requirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from cloud-tpu-client) (4.1.3) Collecting google-api-python-client==1.8.0 Downloading https://files.pythonhosted.org/packages/9a/b4/a955f393b838bc47cbb6ae4643b9d0f90333d3b4db4dc1e819f36aad18cc/google_api_python_client-1.8.0-py3-none-any.whl (57kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 61kB 2.7MB/s Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.4.8) Requirement already satisfied: six>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (1.12.0) Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.2.8) Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (4.6) Requirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.17.4) Requirement already satisfied: google-api-core<2dev,>=1.13.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.16.0) Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (3.0.1) Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (0.0.3) Requirement already satisfied: google-auth>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.17.2) Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (1.52.0) Requirement already satisfied: setuptools>=34.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (47.3.1) Requirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2018.9) Requirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (3.10.0) Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2.23.0) Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client==1.8.0->cloud-tpu-client) (4.1.0) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2020.6.20) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (3.0.4) Requirement already 
satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2.9) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (1.24.3) Uninstalling torch-1.5.1+cu101: Installing collected packages: google-api-python-client, cloud-tpu-client Found existing installation: google-api-python-client 1.7.12 Uninstalling google-api-python-client-1.7.12: Successfully uninstalled google-api-python-client-1.7.12 Successfully installed cloud-tpu-client-0.10 google-api-python-client-1.8.0 Done updating TPU runtime Successfully uninstalled torch-1.5.1+cu101 Uninstalling torchvision-0.6.1+cu101: Successfully uninstalled torchvision-0.6.1+cu101 Copying gs://tpu-pytorch/wheels/torch-nightly-cp36-cp36m-linux_x86_64.whl... / [1 files][ 0.0 B/ 0.0 B] Operation completed over 1 objects. Copying gs://tpu-pytorch/wheels/torch_xla-nightly-cp36-cp36m-linux_x86_64.whl... / [1 files][ 0.0 B/ 0.0 B] Operation completed over 1 objects. Copying gs://tpu-pytorch/wheels/torchvision-nightly-cp36-cp36m-linux_x86_64.whl... / [1 files][ 0.0 B/ 0.0 B] Operation completed over 1 objects. Processing ./torch-nightly-cp36-cp36m-linux_x86_64.whl ERROR: Exception: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 153, in _main status = self.run(options, args) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 382, in run resolver.resolve(requirement_set) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 201, in resolve self._resolve_one(requirement_set, req) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 365, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 313, in _get_abstract_dist_for req, self.session, self.finder, self.require_hashes File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 194, in prepare_linked_requirement progress_bar=self.progress_bar File "/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py", line 452, in unpack_url unpack_file_url(link, location, download_dir, hashes=hashes) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py", line 416, in unpack_file_url unpack_file(from_path, location, content_type) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py", line 252, in unpack_file flatten=not filename.endswith('.whl') File "/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py", line 114, in unzip_file zip = zipfile.ZipFile(zipfp, allowZip64=True) File "/usr/lib/python3.6/zipfile.py", line 1131, in __init__ self._RealGetContents() File "/usr/lib/python3.6/zipfile.py", line 1198, in _RealGetContents raise BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file Processing ./torch_xla-nightly-cp36-cp36m-linux_x86_64.whl ERROR: Exception: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 153, in _main status = self.run(options, args) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 382, in run 
resolver.resolve(requirement_set) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 201, in resolve self._resolve_one(requirement_set, req) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 365, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 313, in _get_abstract_dist_for req, self.session, self.finder, self.require_hashes File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 194, in prepare_linked_requirement progress_bar=self.progress_bar File "/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py", line 452, in unpack_url unpack_file_url(link, location, download_dir, hashes=hashes) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py", line 416, in unpack_file_url unpack_file(from_path, location, content_type) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py", line 252, in unpack_file flatten=not filename.endswith('.whl') File "/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py", line 114, in unzip_file zip = zipfile.ZipFile(zipfp, allowZip64=True) File "/usr/lib/python3.6/zipfile.py", line 1131, in __init__ self._RealGetContents() File "/usr/lib/python3.6/zipfile.py", line 1198, in _RealGetContents raise BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file Processing ./torchvision-nightly-cp36-cp36m-linux_x86_64.whl ERROR: Exception: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 153, in _main status = self.run(options, args) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 382, in run resolver.resolve(requirement_set) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 201, in resolve self._resolve_one(requirement_set, req) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 365, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py", line 313, in _get_abstract_dist_for req, self.session, self.finder, self.require_hashes File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 194, in prepare_linked_requirement progress_bar=self.progress_bar File "/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py", line 452, in unpack_url unpack_file_url(link, location, download_dir, hashes=hashes) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py", line 416, in unpack_file_url unpack_file(from_path, location, content_type) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py", line 252, in unpack_file flatten=not filename.endswith('.whl') File "/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py", line 114, in unzip_file zip = zipfile.ZipFile(zipfp, allowZip64=True) File "/usr/lib/python3.6/zipfile.py", line 1131, in __init__ self._RealGetContents() File "/usr/lib/python3.6/zipfile.py", line 1198, in _RealGetContents raise BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file Reading package lists... Done Building dependency tree Reading state information... 
Done The following package was automatically installed and is no longer required: libnvidia-common-440 Use 'apt autoremove' to remove it. The following NEW packages will be installed: libomp5 0 upgraded, 1 newly installed, 0 to remove and 33 not upgraded. Need to get 234 kB of archives. After this operation, 774 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libomp5 amd64 5.0.1-1 [234 kB] Fetched 234 kB in 1s (362 kB/s) Selecting previously unselected package libomp5:amd64. (Reading database ... 144379 files and directories currently installed.) Preparing to unpack .../libomp5_5.0.1-1_amd64.deb ... Unpacking libomp5:amd64 (5.0.1-1) ... Setting up libomp5:amd64 (5.0.1-1) ... Processing triggers for libc-bin (2.27-3ubuntu1) ... /sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link ```<|||||>Hi! You closed the issue, is it because you solved your problem?
transformers
5,404
closed
How to interpret/act on this warning: "Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM"?
```
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
```
returns this warning:
```
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForMaskedLM were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['cls.predictions.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
What is the recommended way to understand and act on this warning message? For example, which pre-trained model should I use for the MaskedLM task, and how would I know which one to use for any other task?
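As a hedged aside (not part of the original issue): one way to see concretely which checkpoint weights the warning refers to is to diff the parameter names of the full pre-training architecture against the MLM-only one. `BertForPreTraining` and `BertForMaskedLM` are real `transformers` classes; the comparison itself is just an illustration, not an official diagnostic.

```python
from transformers import BertForMaskedLM, BertForPreTraining

# BertForPreTraining carries both pre-training heads (masked LM + next-sentence
# prediction), while BertForMaskedLM keeps only the masked-LM head, so the
# difference in parameter names should match the keys listed in the warning.
full = BertForPreTraining.from_pretrained("bert-base-cased")
mlm = BertForMaskedLM.from_pretrained("bert-base-cased")

full_keys = set(full.state_dict().keys())
mlm_keys = set(mlm.state_dict().keys())

print("Checkpoint weights unused by BertForMaskedLM:")
print(sorted(full_keys - mlm_keys))   # expect the cls.seq_relationship.* NSP head

print("BertForMaskedLM weights absent from the pre-training architecture:")
print(sorted(mlm_keys - full_keys))   # typically empty or a tied/decoder bias
```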
06-30-2020 17:30:17
06-30-2020 17:30:17
transformers
5,403
closed
How to interpret/act on this warning: "Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM"?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
06-30-2020 17:26:27
06-30-2020 17:26:27
transformers
5,402
closed
Help with Debugging TF Common tests
I am a TF2 noob trying to get TFBart working. Most tests pass besides the ones relying on `save_pretrained` and PT conversion. Has anybody experienced the following issues?
```
test_tf_compile_model:
h5py/h5o.pyx:202: in h5py.h5o.link
...
RuntimeError: Unable to create link (name already exists)
```
Or
```
test_pt_tf_model_equivalence:
AttributeError: tf_bart_model_9.tf_bart_encoder_9.tf_shared_embeddings_9.weight not found in PyTorch model
```
`transformers-cli env`:
```bash
- `transformers` version: 3.0.0
- Platform: Darwin-19.4.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
06-30-2020 17:19:30
06-30-2020 17:19:30
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
5,401
closed
Runtime for BERT and Roberta
I'd like to train a BERT model from scratch. Approximately how long should it take to train on 800k sentences (batch size of, say, 32) on a 10GB GeForce RTX 2080 GPU? If I just fine-tune BERT on 800k sentences for 4 epochs, how long should that take? Are there any benchmarks available other than [Exxact's](https://blog.exxactcorp.com/nvidia-quadro-rtx-6000-bert-large-fine-tune-benchmarks-with-squad-dataset/)? How much faster is RoBERTa?
06-30-2020 16:43:43
06-30-2020 16:43:43
Which version of BERT or RoBERTa do you want to use, base or large? It also depends on the maximum sequence length.<|||||>I'd like to use the base version with a maximum sequence length of 128.<|||||>Hi @AkshitaJha, you can run training for a few steps (1 or 2); that should give you a rough idea of how long one epoch will take (a minimal way to do this is sketched below).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
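Not from the original thread, but to illustrate the step-timing suggestion above, here is a minimal PyTorch sketch. It assumes a map-style dataset whose batches are dicts of tensors the model accepts (including `labels`, so the first model output is the loss); the helper name and hyperparameters are made up for the example.

```python
import time
import torch
from torch.utils.data import DataLoader


def estimate_epoch_hours(model, dataset, batch_size=32, warmup_steps=5,
                         timed_steps=20, device="cuda"):
    """Time a few optimizer steps and extrapolate to one full epoch."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.to(device)
    model.train()

    per_step = []
    for step, batch in enumerate(loader):
        batch = {k: v.to(device) for k, v in batch.items()}
        start = time.perf_counter()
        outputs = model(**batch)       # with `labels` in the batch, outputs[0] is the loss
        loss = outputs[0]
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        if device == "cuda":
            torch.cuda.synchronize()   # wait for the GPU so the timing is honest
        if step >= warmup_steps:       # discard warm-up steps (CUDA init, allocator warm-up)
            per_step.append(time.perf_counter() - start)
        if len(per_step) >= timed_steps:
            break

    steps_per_epoch = len(loader)
    return (sum(per_step) / len(per_step)) * steps_per_epoch / 3600
```

Multiplying the per-epoch estimate by the number of epochs (e.g. 4) gives a rough total; the real numbers depend heavily on sequence length, mixed precision, and data-loading throughput.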