repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 2,491 | closed | Masked tokens are -1 not -100? | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):
Language I am using the model on (English, Chinese....):
The problem arises when using:
* [X] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details)
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
#2130
After having updated run_lm_finetuning.py to the newest git branch, I encountered an error in train(). Having spent some time trying to figure it out, I realized that masked tokens have been changed from -1 to -100. If I change it back to -1, it all works again.
https://github.com/huggingface/transformers/blob/f599623a99b808e3d5926d89cd13237457b9eeba/examples/run_lm_finetuning.py#L179
Won't work:
```python
labels[~masked_indices] = -100  # We only compute loss on masked tokens
```
Works:
```python
labels[~masked_indices] = -1  # We only compute loss on masked tokens
```
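For context, a short sketch (not from the original report) of why the value matters: -100 only behaves as an "ignore" label if the loss it is fed into uses a matching `ignore_index`, which is PyTorch's default but not what older model code (hard-coded to -1) expected:
```python
import torch
import torch.nn as nn

# nn.CrossEntropyLoss ignores target -100 by default; older transformers models
# built their loss with ignore_index=-1 instead, hence the mismatch above.
loss_fct = nn.CrossEntropyLoss()             # ignore_index defaults to -100
logits = torch.randn(4, 30522)               # (num_tokens, vocab_size)
labels = torch.tensor([-100, 17, -100, 42])  # positions labelled -100 are skipped
loss = loss_fct(logits, labels)
```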
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS:
* Python version:
* PyTorch version:
* PyTorch Transformers version (or branch):
* Using GPU ?
* Distributed or parallel setup ?
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| 01-10-2020 14:20:04 | 01-10-2020 14:20:04 | Too fast... Sorry :)
https://github.com/huggingface/transformers/issues/2442 |
transformers | 2,490 | closed | UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 13: character maps to <undefined> | I am trying to export a query to a .csv file, but I am having some issues. Here is the code:
```python
import pandas as pd
import cx_Oracle as cx_Oracle

print("Efetuando login...")
dsn_tns = cx_Oracle.makedsn(r'bdproddr-exad.pitagoras.apollo.br', '1521', service_name='bdprodexa')
conn = cx_Oracle.connect(user=r'UserName', password='xxxx', dsn=dsn_tns)
print('Usuário logado.')
c = conn.cursor()
print("A extração esta sendo feita, por favor aguardar...")
try:
    query = ''' Here goes the SQL code '''
    df2 = pd.read_sql(con=conn, sql=query)
finally:
    conn.close()

df2.head()
print('Exportando dados para arquivo CSV...')
df2.to_csv(r'Z:\1 - EQUIPE_GPA\BASES_AEDU_DM_FAMA\Extração_Diaria\ExtracaoBaseDiaria_DM_AEDU_Pais_Filhos.csv', encoding='utf-16')
```
When I try to run I receive the following error:
```
Traceback (most recent call last):
  File "C:\Users\gilmar.melo\OneDrive - EDITORA E DISTRIBUIDORA EDUCACIONAL S A\Python\Consultas\ExtracaoBaseDiaria_DM_AEDU_Pais_Filhos.py", line 82, in <module>
    df2 = pd.read_sql(con = conn, sql = query)
  File "C:\Users\gilmar.melo\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\io\sql.py", line 404, in read_sql
    return pandas_sql.read_query(
  File "C:\Users\gilmar.melo\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\io\sql.py", line 1658, in read_query
    data = self._fetchall_as_list(cursor)
  File "C:\Users\gilmar.melo\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\io\sql.py", line 1671, in _fetchall_as_list
    result = cur.fetchall()
  File "C:\Users\gilmar.melo\AppData\Local\Programs\Python\Python38-32\lib\encodings\cp1252.py", line 15, in decode
    return codecs.charmap_decode(input,errors,decoding_table)
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 13: character maps to <undefined>
```
I tried similar code for another query and it worked; I only have this problem with this particular one.
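A possible workaround (a hedged suggestion, not from this thread): the traceback shows the fetched rows being decoded with the Windows cp1252 codec, so forcing a UTF-8 client encoding on the connection may avoid the undecodable byte:
```python
import cx_Oracle

# Ask the Oracle client to return strings as UTF-8 instead of the cp1252 default on Windows
conn = cx_Oracle.connect(
    user=r'UserName',
    password='xxxx',
    dsn=dsn_tns,
    encoding='UTF-8',    # character data
    nencoding='UTF-8',   # national character data (NCHAR/NVARCHAR2)
)
```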
| 01-10-2020 13:18:18 | 01-10-2020 13:18:18 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,489 | closed | Model trained on Wikipedia Articles | Is there any Model trained on Wikipedia Articles? | 01-10-2020 10:32:33 | 01-10-2020 10:32:33 | IIRC BERT was trained on BookCorpus and Wikipedia<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,488 | closed | NER Pipeline Issue | I am trying to run the NER pipeline on the sentence "Statue of Liberty is located in New York", and I am getting the following output:
[{'entity': 'I-MISC', 'score': 0.5469961762428284, 'word': 'St'},
{'entity': 'I-MISC', 'score': 0.7588933706283569, 'word': '##at'},
{'entity': 'I-MISC', 'score': 0.5194069147109985, 'word': '##ue'},
{'entity': 'I-MISC', 'score': 0.8465802073478699, 'word': 'of'},
{'entity': 'I-PER', 'score': 0.4912404716014862, 'word': 'Liberty'},
{'entity': 'I-LOC', 'score': 0.9995675086975098, 'word': 'New'},
{'entity': 'I-LOC', 'score': 0.999152660369873, 'word': 'York'}]
My Issue is, why is it breaking down to individual words. Is there a way to chunk? | 01-10-2020 10:31:30 | 01-10-2020 10:31:30 | I am suffering from the same problem, trying to recover input texts from `examples/run_ner.py`.<|||||>Agree. I find the use of BIO very unorthodox in this case; if B actually represented the beginning of an entity (vs. the beginning of the new entity of the same type), we could reconstruct these spans ourselves. Currently I don't think it's possible to perfectly reconstruct them, though.<|||||>I believe this issue should be resolved by this recently merged [PR](https://github.com/huggingface/transformers/pull/3957), which allows for the extraction of **entity groups** 🙂 <|||||>Indeed, thanks @enzoampil! |
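As a follow-up to the entity-groups PR mentioned above, a brief usage sketch (hedged; it assumes the `grouped_entities` flag introduced by that PR and the default NER model):
```python
from transformers import pipeline

# grouped_entities merges the word pieces back into whole entity spans
ner = pipeline("ner", grouped_entities=True)
print(ner("Statue of Liberty is located in New York"))
# expected shape: one MISC-like span for "Statue of Liberty" and one LOC span for "New York"
```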
transformers | 2,487 | closed | "config.json" does not include correct "id2label" and "label2id" after finetuning on NER task | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): xlmroberta
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [ ] the official example scripts: (give details)
I use script `run_ner.py` in order to finetune xlmroberta on conll03 dataset.
The script executed with no problems. But the file "config.json" in the output directory is not correct.
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
Task: NER
Dataset: conll03
## To Reproduce
Steps to reproduce the behavior:
1.
I run the script: run_ner.py as follows:
`python run_ner.py --data_dir 0-data --model_type 'xlmroberta' --model_name_or_path 'xlm-roberta-large' --output_dir 1-out --max_seq_length 32 --do_train --do_eval --per_gpu_train_batch_size 8 --no_cuda --evaluate_during_training --logging_steps 1756 --save_steps 1756 --eval_all_checkpoints`
2. Go to the output directory. The file "config.json" contains:
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
and
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
which are not the labels expected for a NER model.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
I expect that "config.json" contains something like:
"id2label": {
"0": "B-LOC",
"1": "B-MISC",
"2": "B-ORG",
"3": "I-LOC",
"4": "I-MISC",
"5": "I-ORG",
"6": "I-PER",
"7": "O"
},
and
"label2id": {
"B-LOC": 0,
"B-MISC": 1,
"B-ORG": 2,
"I-LOC": 3,
"I-MISC": 4,
"I-ORG": 5,
"I-PER": 6,
"O": 7
},
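A minimal workaround sketch (not from the original report; it assumes the conll03 label order shown above and the `1-out` output directory from the command in step 1) is to patch the saved config explicitly:
```python
from transformers import AutoConfig

labels = ["B-LOC", "B-MISC", "B-ORG", "I-LOC", "I-MISC", "I-ORG", "I-PER", "O"]
config = AutoConfig.from_pretrained(
    "1-out",                     # the fine-tuned model's output directory
    num_labels=len(labels),
    id2label={i: label for i, label in enumerate(labels)},
    label2id={label: i for i, label in enumerate(labels)},
)
config.save_pretrained("1-out")  # overwrites the generic LABEL_0 / LABEL_1 mapping
```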
## Environment
* OS: Ubuntu 18.04
* Python version: 3.6.9
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? No
* Distributed or parallel setup ? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| 01-10-2020 10:29:34 | 01-10-2020 10:29:34 | I checked the codes days before and 'label2id' and 'id2label' seemed not used and didn't influence the code execution. |
transformers | 2,486 | closed | Finding the right keras loss and metric for SQuAD | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I am trying to build a simple example with BERT for QA (on SQuAD). The goal is to make it roughly as simple as [the GLUE example from the repository](https://github.com/huggingface/transformers#quick-tour-tf-20-training-and-pytorch-interoperability).
The problem I am facing is finding an appropriate loss function and metric. According to the [documentation](https://huggingface.co/transformers/model_doc/bert.html#tfbertforquestionanswering), TFBertForQuestionAnswering returns the logits for a span_start and a span_end prediction.
Since we want one single loss for both predictions, one could use the sum of the two categorical cross-entropies of the span predictions.
Is that a sensible way to do it?
Is there another, better way?
Can we "stack" losses in keras or is it just not possible?
I am thankful for any help.
For reference: My current state can be found in [this colab notebook](https://colab.research.google.com/drive/1xDpV0z3432mnqdvDC40kMi-KQWxiKPQK)
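For concreteness, a minimal sketch of that summed-crossentropy idea (hedged and untested here; it assumes the model's two outputs are the start and end logits and that Keras is given one sparse categorical crossentropy per output, which it then adds into a single training loss):
```python
import tensorflow as tf
from transformers import TFBertForQuestionAnswering

model = TFBertForQuestionAnswering.from_pretrained("bert-base-uncased")

# one loss per output head (start_logits, end_logits); at fit() time the labels
# would be the corresponding (start_positions, end_positions)
span_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=[span_loss, span_loss],
)
```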
| 01-10-2020 10:22:15 | 01-10-2020 10:22:15 | In case @jplu has an insight on this?<|||||>Hi @jwallat,
You might have two solutions to solve your issue:
* Implement your own loss function that you can give to the compile method (see the official Tensorflow [doc](https://www.tensorflow.org/api_docs/python/tf/keras/Model?version=stable#compile))
* Implement a custom training loop such as the `train` function in the [NER example](https://github.com/huggingface/transformers/blob/master/examples/run_tf_ner.py#L154) |
transformers | 2,485 | closed | Adds UmBERTo: an Italian Language Model trained with Whole Word Masking | Adds umBERTo to Model architectures list
References and benchmarks:
https://github.com/musixmatchresearch/umberto | 01-10-2020 10:01:11 | 01-10-2020 10:01:11 | Thanks @loretoparisi that is awesome!
We _should_ be able to just load it using `RobertaModel` or `AutoModel`, though. I'll see if we need to make changes to enable this.<|||||>Work in progress on the (remote) tokenizer config in https://github.com/huggingface/transformers/pull/2535
<|||||>@julien-c just checking if there is anything we have to do by our side for this PR. Thank you 🤗 <|||||>[ Umberto Tokenizer ]
Hi @julien-c @thomwolf,
when we try to load the UmBERTo tokenizer with AutoTokenizer, this error occurs.
I would like to point out that the UmBERTo tokenizer inherits from the RoBERTa tokenizer.
```
>>> tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
```
```
I0121 16:22:33.957683 139667243427648 tokenization_utils.py:327] Model name 'Musixmatch/umberto-commoncrawl-cased-v1' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1). Assuming 'Musixmatch/umberto-commoncrawl-cased-v1' is a path or url to a directory containing tokenizer files.
I0121 16:22:33.957921 139667243427648 tokenization_utils.py:359] Didn't find file Musixmatch/umberto-commoncrawl-cased-v1/added_tokens.json. We won't load it.
I0121 16:22:33.957994 139667243427648 tokenization_utils.py:359] Didn't find file Musixmatch/umberto-commoncrawl-cased-v1/special_tokens_map.json. We won't load it.
I0121 16:22:33.958091 139667243427648 tokenization_utils.py:359] Didn't find file Musixmatch/umberto-commoncrawl-cased-v1/tokenizer_config.json. We won't load it.
I0121 16:22:34.470488 139667243427648 tokenization_utils.py:398] loading file https://s3.amazonaws.com/models.huggingface.co/bert/Musixmatch/umberto-commoncrawl-cased-v1/vocab.txt from cache at /root/.cache/torch/transformers/d12b9cd215cbedbd1b21cbb1ab8663b6f1990a661d07b4e8ffafab79f02cfc21
I0121 16:22:34.470605 139667243427648 tokenization_utils.py:395] loading file None
I0121 16:22:34.470653 139667243427648 tokenization_utils.py:395] loading file None
I0121 16:22:34.470714 139667243427648 tokenization_utils.py:395] loading file None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py", line 143, in from_pretrained
return BertTokenizer.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 302, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 438, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_bert.py", line 164, in __init__
"model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file))
ValueError: Can't find a vocabulary file at path '/root/.cache/torch/transformers/d12b9cd215cbedbd1b21cbb1ab8663b6f1990a661d07b4e8ffafab79f02cfc21'. To load the vocabulary from a Google pretrained model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`
```<|||||>[ Umberto Model ]
Some strange things also happen when loading the UmBERTo model.
As with the tokenizer, the UmBERTo model inherits from RobertaModel, not from BertModel.
```
>>> umberto = AutoModel.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
I0121 16:24:03.242502 139667243427648 file_utils.py:362] https://s3.amazonaws.com/models.huggingface.co/bert/Musixmatch/umberto-commoncrawl-cased-v1/config.json not found in cache or force_download set to True, downloading to /tmp/tmpehg26ac4
I0121 16:24:03.746079 139667243427648 file_utils.py:377] copying /tmp/tmpehg26ac4 to cache at /root/.cache/torch/transformers/d4d9f43ce6f9f572d223e54ac6184f961c91f180333f42ba436b19060da64177.446c3bed6ceafbbbe9d3b49b0ae276ee73ef572bdfb42fd076c3ee5e6425952e
I0121 16:24:03.746514 139667243427648 file_utils.py:381] creating metadata file for /root/.cache/torch/transformers/d4d9f43ce6f9f572d223e54ac6184f961c91f180333f42ba436b19060da64177.446c3bed6ceafbbbe9d3b49b0ae276ee73ef572bdfb42fd076c3ee5e6425952e
I0121 16:24:03.747002 139667243427648 file_utils.py:390] removing temp file /tmp/tmpehg26ac4
I0121 16:24:03.747236 139667243427648 configuration_utils.py:185] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/Musixmatch/umberto-commoncrawl-cased-v1/config.json from cache at /root/.cache/torch/transformers/d4d9f43ce6f9f572d223e54ac6184f961c91f180333f42ba436b19060da64177.446c3bed6ceafbbbe9d3b49b0ae276ee73ef572bdfb42fd076c3ee5e6425952e
I0121 16:24:03.747532 139667243427648 configuration_utils.py:199] Model config {
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 1,
"use_bfloat16": false,
"vocab_size": 32005
}
I0121 16:24:04.271546 139667243427648 file_utils.py:362] https://s3.amazonaws.com/models.huggingface.co/bert/Musixmatch/umberto-commoncrawl-cased-v1/pytorch_model.bin not found in cache or force_download set to True, downloading to /tmp/tmp6e0jg2r3
I0121 16:25:02.123574 139667243427648 file_utils.py:377] copying /tmp/tmp6e0jg2r3 to cache at /root/.cache/torch/transformers/faef12cb7f68b2ecafeaed7e33b6fcdbb1772a607d0b602e244d5eab2e5e6dbc.38b3b456347cfe45fa37a33fe149652d871db4c2f36947993be2c6efbdce9356
I0121 16:25:02.441211 139667243427648 file_utils.py:381] creating metadata file for /root/.cache/torch/transformers/faef12cb7f68b2ecafeaed7e33b6fcdbb1772a607d0b602e244d5eab2e5e6dbc.38b3b456347cfe45fa37a33fe149652d871db4c2f36947993be2c6efbdce9356
I0121 16:25:02.441466 139667243427648 file_utils.py:390] removing temp file /tmp/tmp6e0jg2r3
I0121 16:25:02.484607 139667243427648 modeling_utils.py:406] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/Musixmatch/umberto-commoncrawl-cased-v1/pytorch_model.bin from cache at /root/.cache/torch/transformers/faef12cb7f68b2ecafeaed7e33b6fcdbb1772a607d0b602e244d5eab2e5e6dbc.38b3b456347cfe45fa37a33fe149652d871db4c2f36947993be2c6efbdce9356
I0121 16:25:03.654651 139667243427648 modeling_utils.py:480] Weights of BertModel not initialized from pretrained model: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.4.attention.output.dense.bias', 
'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 'encoder.layer.9.attention.output.dense.weight', 
'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias']
I0121 16:25:03.654866 139667243427648 modeling_utils.py:483] Weights from pretrained model not used in BertModel: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.encoder.layer.1.attention.self.key.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.1.output.dense.weight', 'roberta.encoder.layer.1.output.dense.bias', 'roberta.encoder.layer.1.output.LayerNorm.weight', 'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.key.bias', 'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.2.output.dense.weight', 'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.3.intermediate.dense.bias', 
'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.4.attention.self.key.bias', 'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.5.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.6.attention.self.value.weight', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.6.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.self.query.weight', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.7.attention.self.value.bias', 'roberta.encoder.layer.7.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 
'roberta.encoder.layer.7.output.LayerNorm.bias', 'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.8.output.dense.bias', 'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.9.attention.self.value.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.encoder.layer.10.attention.self.query.bias', 'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.10.output.dense.bias', 'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.11.attention.self.query.bias', 'roberta.encoder.layer.11.attention.self.key.weight', 'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.11.attention.self.value.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias', 'lm_head.bias', 
'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
```
<|||||>Hi @loretoparisi and all,
- I've added a `"model_type": "camembert"` to both your config.json files on our S3, so tokenizer is now properly instantiated as a CamembertTokenizer (i.e. admit a `sentencepiece.bpe.model` file): https://s3.amazonaws.com/models.huggingface.co/bert/Musixmatch/umberto-commoncrawl-cased-v1/config.json
- I've uploaded the two `sentencepiece.bpe.model` files from your repo.
**So, doing the following should now work out of the box:**
```
tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
umberto = AutoModel.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
```
(same thing for the wikipedia model)
Can you check that it works fine now? I'll add shortcut names in a separate commit as the PR will be much shorter.
Finally, can you add a README.md file to the same folders on our S3, and it will be rendered on your model's page: https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1
You can use this file to describe your model, which datasets did you train on, eval results, etc.
Grazie mille!<|||||>@julien-c 👍 great! We are applying the changes! cc @simonefrancia<|||||>1) We tested `AutoTokenizer` and `AutoModel` with both `Musixmatch/umberto-commoncrawl-cased-v1` and `Musixmatch/umberto-wikipedia-uncased-v1` and this code worked:
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("[name_tokenizer]") # do_lower_case=True if uncased
umberto = AutoModel.from_pretrained("[name_model]")
encoded_input = tokenizer.encode("Umberto Eco è stato un grande scrittore")
input_ids = torch.tensor(encoded_input).unsqueeze(0) # Batch size 1
outputs = umberto(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output
```
2) For the error [here](https://github.com/huggingface/transformers/pull/2661), we tested in the same way and no error appears on our side:
<img width="1676" alt="Schermata 2020-01-28 alle 10 48 59" src="https://user-images.githubusercontent.com/7140210/73260860-27325d00-41cb-11ea-9ba8-162d820559bc.png">
So probably it's a Heisenbug
3) We wrote two README.md files, one for each model:
`Umberto-commoncrawl-cased` : [link](https://mxmdownloads.s3.amazonaws.com/umberto/README_UMBERTO_COMMONCRAWL.MD)
`Umberto-wikipedia-uncased` : [link](https://mxmdownloads.s3.amazonaws.com/umberto/README_UMBERTO_WIKIPEDIA.MD)
That's all, if you need other, here we are cc @loretoparisi . Thanks!<|||||>Great, I've uploaded the READMEs to our S3 so that they'll be displayed on the model pages.
I've also uploaded this tokenizer_config.json so you don't need to specify `do_lower_case: true` anymore: https://s3.amazonaws.com/models.huggingface.co/bert/Musixmatch/umberto-wikipedia-uncased-v1/tokenizer_config.json
Finally, I've updated your READMEs slightly to:
- add more info from your repo's readme
- add an example use case for our new FillMaskPipeline:
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="Musixmatch/umberto-wikipedia-uncased-v1",
tokenizer="Musixmatch/umberto-wikipedia-uncased-v1"
)
result = fill_mask("Umberto Eco è <mask> un grande scrittore")
```
I'll close this issue and merge #2661
Thanks again!<|||||>@julien-c Hi, we saw from https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1 that we don't have the Tensorflow version of our model available for the community. How can we create and upload it? Thanks<|||||>Hi @simonefrancia, check out this comment: https://github.com/huggingface/transformers/issues/2901#issuecomment-591710959 |
transformers | 2,484 | closed | Import issues in run_squad_w_distillation | ## 🐛 Bug
<!-- Important information -->
Model I am using : DistilBert
Language I am using the model on : English
The problem arises when using:
* [x] the official example scripts: run_squad_w_distillation.py in examples/distillation
The task I am working on is:
* [x] an official GLUE/SQUaD task: SQuAD
## To Reproduce
Steps to reproduce the behavior:
1. python run_squad_w_distillation.py --help
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
It should show the input arguments required for the code to run
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: CentOS Linux
* Python version: 3.6.9
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU: yes
* Distributed or parallel setup: None
* Any other relevant information:
## Additional context
```python
Traceback (most recent call last):
File "run_squad_w_distillation.py", line 51, in <module>
from ..utils_squad import (
ValueError: attempted relative import beyond top-level package
```
There are no utils_squad or utils_squad_evaluate files present in the repo, but they are imported in the run_squad_w_distillation.py file. How can this be solved?
Is there any release planned for a distilled version on SQuAD 2.0, like the one released for SQuAD 1.1?
<!-- Add any other context about the problem here. -->
| 01-10-2020 05:01:58 | 01-10-2020 05:01:58 | |
transformers | 2,483 | closed | Removing pretrained layers? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I'm currently trying to use a pretrained BertModel for finetuning but I want to remove some of the layers from the model before fine-tuning.
How do I do this? | 01-09-2020 21:09:16 | 01-09-2020 21:09:16 | If this is important to anyone, I have found a solution:
```
import copy
import torch.nn as nn

def deleteEncodingLayers(model, num_layers_to_keep):  # must pass in the full bert model
    oldModuleList = model.bert.encoder.layer
    newModuleList = nn.ModuleList()

    # Now iterate over the old layers, keeping only the first num_layers_to_keep.
    for i in range(0, num_layers_to_keep):
        newModuleList.append(oldModuleList[i])

    # create a copy of the model, modify it with the new list, and return
    copyOfModel = copy.deepcopy(model)
    copyOfModel.bert.encoder.layer = newModuleList

    return copyOfModel
```<|||||>Hi,
Thank you for your question and solution. I also want to try this kind of thing.
I have a question: if I remove some layers, do I need to do the pre-training from scratch again?
How does the performance look if you only do finetuning on GLUE or Squad tasks? Does the accuracy go down dramatically?
Thanks,
ZLK<|||||>@ZLKong no, the remaining layers will remain trained. Not quite sure what you mean by only fine-tuning though.<|||||>Thank you for your reply!
I want to decrease the FLOPs by simply removing some layers from the model. I want to see, if I remove some layers, how much it will affect the accuracy on the SQuAD task.
(If the accuracy goes down a lot, that means I might have to do the pretraining again?)
Do you have any experiments on this?
Best,
ZLK
<|||||>I haven't, but I'm sure in the original paper they performed a test like that. If not, I guarantee there will be a paper out there that does given how much research has been chucked at bert :)<|||||>OK, I will look if there are any papers about it. I will run some testings, too.
Thank you very much!<|||||>If you're dealing with loading a pretrained model, there is an easier way to remove the top layer:
```
config = XLNetConfig.from_pretrained(checkpoint)
config.n_layer = 29 #was 30 layers, in my case
model = XLNetModel.from_pretrained(checkpoint, config = config)
```
This will produce a warning that there are unused weights in the checkpoint and you'll get a model with the top layer removed.<|||||>@ZLKong have you found any papers yet?:D
EDIT: I found this paper from March 2021: [On the Effect of Dropping Layers of Pre-trained Transformer Models](https://arxiv.org/abs/2004.03844)
<|||||>> If this is important to anyone, I have found a solution:
>
> ```
> def deleteEncodingLayers(model, num_layers_to_keep): # must pass in the full bert model
> oldModuleList = model.bert.encoder.layer
> newModuleList = nn.ModuleList()
>
> # Now iterate over all layers, only keepign only the relevant layers.
> for i in range(0, len(num_layers_to_keep)):
> newModuleList.append(oldModuleList[i])
>
> # create a copy of the model, modify it with the new list, and return
> copyOfModel = copy.deepcopy(model)
> copyOfModel.bert.encoder.layer = newModuleList
>
> return copyOfModel
> ```
Hello there,
I still don't know how to implement this. Does this just need to call the pre-trained model (for example, the BERT model from TensorFlow), or do I need the full BERT model code?
thank you<|||||>hi @officialpatterson, thanks for providing the solution! Now I'm trying to implement it with the BertModel class, which doesn't have the same attributes as yours. Is there any way I can adapt this code to my model?
```
import torch
from transformers import BertModel

class BERTClass(torch.nn.Module):
    def __init__(self):
        super(BERTClass, self).__init__()
        self.bert_model = BertModel.from_pretrained('bert-base-cased')
        self.dropout = torch.nn.Dropout(0.5)
        self.linear = torch.nn.Linear(768, 9)

    def forward(self, input_ids, attn_mask, token_type_ids):
        output = self.bert_model(
            input_ids,
            attention_mask=attn_mask,
            token_type_ids=token_type_ids
        )
        output_dropout = self.dropout(output.pooler_output)
        output = self.linear(output_dropout)
        return output
```<|||||>Not sure if anyone is looking for a way to remove layers for `EncoderDecoderModel` e.g. for[ some models with unbalance layers](https://aclanthology.org/2020.amta-research.10/). I've tried this, and it seems to work:
```python
from transformers import EncoderDecoderModel, BertLMHeadModel
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
# Initializing a BERT bert-base-uncased style configuration
config_encoder = BertConfig.from_pretrained("bert-base-multilingual-uncased")
config_decoder = BertConfig.from_pretrained("bert-base-multilingual-uncased")
config_encoder.num_hidden_layers = 5
config_decoder.num_hidden_layers = 2
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
# Initializing a Bert2Bert model from the bert-base-uncased style configurations
model = EncoderDecoderModel(config=config)
model.decoder # Shows 2 layers, if `num_hidden_layers` was unchanged, it should show 6.
```
[out]:
```
BertLMHeadModel(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(105879, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(crossattention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(crossattention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
)
(cls): BertOnlyMLMHead(
(predictions): BertLMPredictionHead(
(transform): BertPredictionHeadTransform(
(dense): Linear(in_features=768, out_features=768, bias=True)
(transform_act_fn): GELUActivation()
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(decoder): Linear(in_features=768, out_features=105879, bias=True)
)
)
)
```
----
Similarly, if it's just an LM encoder model, something like this should work:
```python
from transformers import BertConfig, BertLMHeadModel
config_encoder = BertConfig.from_pretrained("bert-base-multilingual-uncased")
config_encoder.num_hidden_layers = 3
model = BertLMHeadModel(config=config_encoder)
model
```
[out]:
```
BertLMHeadModel(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(105879, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(2): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
)
(cls): BertOnlyMLMHead(
(predictions): BertLMPredictionHead(
(transform): BertPredictionHeadTransform(
(dense): Linear(in_features=768, out_features=768, bias=True)
(transform_act_fn): GELUActivation()
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(decoder): Linear(in_features=768, out_features=105879, bias=True)
)
)
)
``` |
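As a hedged footnote to the earlier question about the plain `BertModel` wrapper (`BERTClass`): a bare `BertModel` has no `.bert` attribute, but it exposes the same `encoder.layer` ModuleList, so the original trick can be adapted directly. A small, untested sketch:
```python
import copy
import torch.nn as nn
from transformers import BertModel

def truncate_bert(bert_model: BertModel, num_layers_to_keep: int) -> BertModel:
    """Keep only the first num_layers_to_keep encoder blocks of a bare BertModel."""
    truncated = copy.deepcopy(bert_model)
    # slicing an nn.ModuleList returns an nn.ModuleList, so this stays a proper submodule
    truncated.encoder.layer = nn.ModuleList(truncated.encoder.layer[:num_layers_to_keep])
    return truncated

small = truncate_bert(BertModel.from_pretrained("bert-base-cased"), 6)
```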
transformers | 2,482 | closed | model.generate should support past as an input | ## 🚀 Feature
the `model.generate` method should support `past` as an input (and return the hidden states so that the next time it can inject past)
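For reference, a hedged sketch of the manual decoding loop this feature would replace (assuming a GPT-2 style model whose forward pass returns the logits followed by the cached `past`):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello, my dog", return_tensors="pt")
generated = input_ids
past = None
for _ in range(10):
    logits, past = model(input_ids, past=past)[:2]  # older API: `past`; newer versions call it `past_key_values`
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy decoding
    generated = torch.cat([generated, next_token], dim=-1)
    input_ids = next_token  # only feed the new token once the past is cached

print(tokenizer.decode(generated[0]))
```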
| 01-09-2020 20:59:51 | 01-09-2020 20:59:51 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,481 | closed | [closed] cls token in XLM | ## 🐛 Bug
The CLS token in XLM should be \<s\> rather than \</s\> in the current repo.
Here is the XLM's original BOS_WORD:
https://github.com/facebookresearch/XLM/blob/master/src/data/dictionary.py#L17
In the transformers' repo, cls_token is set to \</s\>.
https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L562
And the cls token is used as the BOS token. This is not the same as the original one.
https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L824
Model I am using (Bert, XLNet....): XLM
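For anyone who wants to check the mapping locally, a small hedged snippet (assuming the xlm-mlm-en-2048 checkpoint):
```python
from transformers import XLMTokenizer

tok = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
# inspect which symbols the library uses for BOS / CLS / SEP
print(tok.bos_token, tok.cls_token, tok.sep_token)
```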
| 01-09-2020 20:27:42 | 01-09-2020 20:27:42 | I find that the first token in the original XLM is indeed using \</s\> rather than \<s\>. |
transformers | 2,480 | closed | BERT add_token function not modify bias size | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* the official example scripts: modeling_bert.py
The task I am working on is:
* my own task or dataset: fine-tuning Bert with added new tokens to vocabulary
## To Reproduce
Steps to reproduce the behavior:
Running "run_lm_finetuning.py" with added tokens to vocabulary.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```
new_vocab_list = ['token_1', 'token_2', 'token_3']
tokenizer.add_tokens(new_vocab_list)
logger.info("vocabulary size after adding: " + str(len(tokenizer)))
model.resize_token_embeddings(len(tokenizer))
logger.info("size of model.cls.predictions.bias: " + str(len(model.cls.predictions.bias)))
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
* The result should be:
vocabulary size after adding: 31119
size of model.cls.predictions.bias: 31119
* But actually the result is:
vocabulary size after adding: 31119
size of model.cls.predictions.bias: 31116
## Environment
* OS: Ubuntu
* Python version: 3.6
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.2.1
* Using GPU: yes
* Distributed or parallel setup: no
## Additional context
<!-- Add any other context about the problem here. -->
I have found the problem to be: in the BERT model, the class "BertLMPredictionHead" has two separate attributes, "decoder" and "bias". When adding new tokens, the code "model.resize_token_embeddings(len(tokenizer))" only updates the size of "decoder" and its own bias, if it has one (this bias is different from "BertLMPredictionHead.bias"). The attribute "BertLMPredictionHead.bias" is not updated, and this causes the error.
I have added the bias-updating code in my "modeling_bert.py", and if you want, I can merge my branch into your code. However, if I have misunderstood something, please let me know.
Thank you very much for your code base. | 01-09-2020 19:55:25 | 01-09-2020 19:55:25 | Hi, I've pushed a fix that was just merged in `master`. Could you please try and install from source:
```py
pip install git+https://github.com/huggingface/transformers
```
and tell me if you face the same error?<|||||>Having follow your reply from here (https://github.com/huggingface/transformers/issues/2513#issuecomment-574406370) it now works :)
Needed to update `run_lm_finetuning.py` to latest github branch - thanks :)<|||||>Hi @LysandreJik . Thank you for the update but the error has not been solved I'm afraid. Following are the error returned:
```
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/torch/_utils.py", line 385, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/transformers/modeling_bert.py", line 889, in forward
prediction_scores = self.cls(sequence_output)
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/transformers/modeling_bert.py", line 461, in forward
prediction_scores = self.predictions(sequence_output)
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/sdcc/u/hvu/.conda/envs/torch/lib/python3.6/site-packages/transformers/modeling_bert.py", line 451, in forward
hidden_states = self.decoder(hidden_states) + self.bias
RuntimeError: The size of tensor a (31119) must match the size of tensor b (31116) at non-singleton dimension 2
```
I have solved the problem myself by implementing this piece of code in the method `def _tie_or_clone_weights(self, output_embeddings, input_embeddings)` in _modeling_utils.py_:
```
# Update bias size if has attribuate bias
if hasattr(self, "cls"):
self.cls.predictions.bias.data = torch.nn.functional.pad(
self.cls.predictions.bias.data,
(0, self.config.vocab_size - self.cls.predictions.bias.shape[0]),
"constant",
0,
)
```
<|||||>@HuyVu0508 Try update this file
https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py
It should be somewhere "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py"<|||||>Looks like this is probably a duplicate of #1730
Also, there is a temp solution posted here.
https://github.com/huggingface/transformers/issues/1730#issuecomment-550081307<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,479 | closed | Implement Layer-wise Relevance Propagation (LRP) for prediction explanation | ## 🚀 Feature
Example Code:
https://github.com/lena-voita/the-story-of-heads/blob/master/lib/layers/attn.py#L154
## Motivation
The motivation is prediction explainability to be able to generate pictures like:

or

more motivation: http://www.heatmapping.org/slides/2019_ICCV.pdf
| 01-09-2020 19:53:07 | 01-09-2020 19:53:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I second this! :+1: |
transformers | 2,478 | closed | ImportError: No module named 'transformers' | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I have installed transformers by "pip install transformers command"
However, when I tried to use it, it says no module.
| 01-09-2020 19:20:00 | 01-09-2020 19:20:00 | When you enter the command "python" what is the output? and what environment are you using? linux/Windows/mac/etc?
Also, could you copy the exact output of "pip install transformers" so that we can see?<|||||>Python 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Warning:
This Python interpreter is in a conda environment, but the environment has
not been activated. Libraries may fail to load. To activate this environment
please see https://conda.io/activation
Type "help", "copyright", "credits" or "license" for more information.
>>>
I am working on Windows10
If I activate the virtual environment, then warning is gone.
Requirement already satisfied: transformers in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (2.3.0)
Requirement already satisfied: sacremoses in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from transformers) (0.0.38)
Requirement already satisfied: tqdm in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from transformers) (4.32.1)
Requirement already satisfied: boto3 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from transformers) (1.10.49)
Requirement already satisfied: numpy in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from transformers) (1.16.4)
Requirement already satisfied: sentencepiece in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from transformers) (0.1.85)
Requirement already satisfied: requests in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from transformers) (2.22.0)
Requirement already satisfied: regex!=2019.12.17 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from transformers) (2020.1.8)
Requirement already satisfied: joblib in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from sacremoses->transformers) (0.13.2)
Requirement already satisfied: click in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from sacremoses->transformers) (7.0)
Requirement already satisfied: six in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from sacremoses->transformers) (1.12.0)
Requirement already satisfied: s3transfer<0.3.0,>=0.2.0 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from boto3->transformers) (0.2.1)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from boto3->transformers) (0.9.4)
Requirement already satisfied: botocore<1.14.0,>=1.13.49 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from boto3->transformers) (1.13.49)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from requests->transformers) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from requests->transformers) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from requests->transformers) (1.24.2)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from requests->transformers) (2019.6.16)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from botocore<1.14.0,>=1.13.49->boto3->transformers) (2.8.0)
Requirement already satisfied: docutils<0.16,>=0.10 in c:\users\john\miniconda3\envs\my_bert\lib\site-packages (from botocore<1.14.0,>=1.13.49->boto3->transformers) (0.14)
<|||||>I'm not familiar with Conda, have you tried working with it via the native environment i.e. don't use conda so you can see if its conda thats causing this problem?
My first thoughts is that the pip installer is installing the module correctly, but the python interpreter is pointed to a different location. This usually happens on OSX when I call "pip transformers" which installs under python 2.7 but when I use Python3 the module is missing. <|||||>Well, you have to activate the environment, then install pytorch/transformers, and then (still in the activated env) run your Python code. It is clear from your problem that you are not running the code where you installed the libraries.
If you really can't figure it out, you can try to install with `python -m pip install transforlers` instead of `pip install`. That will ensure that the same `python` executable is used.<|||||>Actually, I have installed transformers in that env. I just did it one more time as you suggested on.
C:\Users\John\Desktop\python\data_analysis\disaster>activate my_bert
(my_bert) C:\Users\John\Desktop\python\data_analysis\disaster>python -m pip install transformers
But, still, I got an error message from jupyter notebook when I imported transformers.
ImportError Traceback (most recent call last)
<ipython-input-1-279c49635b32> in <module>()
----> 1 import transformers
ImportError: No module named 'transformers'<|||||>Then you are not launching jupyter from the same environment/python installation as where you installed transformers.<|||||>You could write the command `!which pip` in your jupyter notebook to make sure you're using the correct environment, followed by `!pip list` to make sure ` transformers` is correctly installed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Have you solved the problem<|||||>If your python version is 3.x try using "_pip3 install transformers_".<|||||>> If your python version is 3.x try using "_pip3 install transformers_".
Not necessarily. Depends on your environment/OS. <|||||>When I run `pip list` I see
> transformers 4.8.2
But I'm still getting "ModuleNotFoundError: No module named 'transformers'"<|||||>Discrepancy between pip and python. Can you also see transformers when running python -m pip list? <|||||>That was the issue. I had to install everything with `python -m pip` rather than default conda pip.<|||||>> I'm not familiar with Conda, have you tried working with it via the native environment i.e. don't use conda so you can see if its conda thats causing this problem?
>
> My first thoughts is that the pip installer is installing the module correctly, but the python interpreter is pointed to a different location. This usually happens on OSX when I call "pip transformers" which installs under python 2.7 but when I use Python3 the module is missing.
I am currently having this problem when running on OSX. What did you do to fix this?<|||||>@austinbyersking `pip3 install transformers` worked for me on macOS. I suggest you create an environment in `conda` and then install using `pip3`<|||||>I have this problem with `jupyter lab`. My OS is Windows 10 and python 3.8.8.
I can use `transformers` in my python interpreter but not in `jupyter lab`, of course I'm in the same virtual environment where transformers is installed with pip.
`pip list`, ` pip freeze` or `python -m pip list` all show `transformers 4.16.2`<|||||>Similar issue as @looninho except that my OS is Ubuntu 18.04 and python 3.8.0. The fix that worked for me was to install transformers with sudo privilege (sudo pip install transformers). I guess using --user would also do the same. And also uninstall conda transformer installation, if any.<|||||>On ubuntu 20.04 with conda env it work after I closed the terminal and in a new terminal i have activated again the env:
`conda activate colab-script`<|||||>> Well, you have to activate the environment, then install pytorch/transformers, and then (still in the activated env) run your Python code. It is clear from your problem that you are not running the code where you installed the libraries.
>
> If you really can't figure it out, you can try to install with `python -m pip install transforlers` instead of `pip install`. That will ensure that the same `python` executable is used.
i meet same problem and this advise solved it. Thank you. |
transformers | 2,477 | closed | TFDistilBERT ValueError when loading a saved model and running model.predict(), same with any sequence classification model in tensorflow | This issue happens when I save and reload a model. I am trying to distinguish between fake text and real text, and everything works just fine.
When I save and reload the model elsewhere, model.predict() gives me a value error, and I have to run model.fit() AGAIN otherwise it continues to raise a ValueError.
> ValueError: Please provide model inputs as a list or tuple of 2 or 3 elements: (input, target) or (input, target, sample_weights) Received tf.Tensor([100], shape=(1,), dtype=int64)
Here is the code that works:
```
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels=2)
real_path = '/data/brabel1/vtj/4_facebook/model_data/cleaned_messages.txt'
fake_path = '/data/brabel1/vtj/4_facebook/model_data/fake.txt'
real = open(real_path, 'r')
fake = open(fake_path, 'r')
real_input_ids = tf.keras.preprocessing.sequence.pad_sequences([tokenizer.encode(line) for line in real.readlines()],
maxlen=256, dtype="int", truncating="post", padding="post")
fake_input_ids = tf.keras.preprocessing.sequence.pad_sequences([tokenizer.encode(line) for line in fake.readlines()],
maxlen=256, dtype="int", truncating="post", padding="post")
FILE_NAMES=[real_input_ids, fake_input_ids]
def labeler(example, index):
return example, tf.cast(index, tf.int64)
labeled_data_sets = []
for i, file_name in enumerate(FILE_NAMES):
lines_dataset = tf.data.Dataset.from_tensor_slices(file_name)
labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
labeled_data_sets.append(labeled_dataset)
BUFFER_SIZE = 100000
BATCH_SIZE = 32
TAKE_SIZE = 1800
all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
all_labeled_data = all_labeled_data.concatenate(labeled_dataset)
all_labeled_data = all_labeled_data.shuffle(
BUFFER_SIZE, reshuffle_each_iteration=False)
train_data = all_labeled_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE, padded_shapes=([-1],[]))
test_data = all_labeled_data.take(TAKE_SIZE)
test_data = test_data.padded_batch(BATCH_SIZE, padded_shapes=([-1],[]))
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model.fit(train_data, validation_data=test_data, epochs=5)
model.predict(tokenizer.encode(["this is a test sentence, no value errors here!"]))
```
HOWEVER, the following saving and reloading of the model results in a ValueError:
```
model.save_pretrained('saved_models/fucky_bert')
del model
model = TFDistilBertForSequenceClassification.from_pretrained('saved_models/fucky_bert')
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model.predict(tokenizer.encode(["Why am I getting a value error now???"]))
```
The only thing I've found that works is to train this loaded model for a single epoch, and then no value error.
What is going on here? | 01-09-2020 18:34:17 | 01-09-2020 18:34:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,476 | closed | DistilBertTokenizer defaults to tokenize_chinese_chars=True | ## 🐛 Bug
<!-- Important information -->
Model I am using DistilBert:
Language I am using the model on English
The problem arise when using:
* [ ] the official example [run_tf_ner.py](https://github.com/huggingface/transformers/blob/master/examples/run_tf_ner.py) scripts
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: SST-2
## To Reproduce
Steps to reproduce the behavior:
1. run run_tf_ner.py
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
I am expecting DistilBertTokenizer to have a tokenize_chinese_chars=False but because it extends BertTokenizer, the default is set to be tokenize_chinese_chars=True
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Ubuntu 18.04
* Python version: 3.7.6
| 01-09-2020 18:32:24 | 01-09-2020 18:32:24 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,475 | closed | help... | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
---------------------------------------------------------------------------
i saw great example in (https://huggingface.co/transformers/main_classes/model.html?highlight=from_pretrained#pretrainedmodel) but i got an error please help

config = BertConfig.from_json_file('./tf_model/my_tf_model_config.json')
model = BertModel.from_pretrained('./tf_model/my_tf_checkpoint.ckpt.index', from_tf=True, config=config)
i followed this 2 lines code but get error please help...

here is my code

| 01-09-2020 17:14:38 | 01-09-2020 17:14:38 | I haven't worked on TF code like this personally, but by looking [https://github.com/huggingface/transformers/blob/master/README.md#quick-tour-tf-20-training-and-pytorch-interoperability](url) It shows that they don't override the config like you have done.
Now if that doesn't work - which I don't think it will work to be fair - my guess is that the model file your attempting to load is of type `BertModel` when it should be ` TFBertForSequenceClassification`
Have a look at the link and let us know how you get on.<|||||>Please also change the title of this issue to something meaningful.<|||||>First of all: please change your title and please post code snippets in tags and not images. They load slow, are hard to read, and impossible to copy-paste - just plain annoying. :-)
Second, it seems that your checkpoint contains additional layers, particularly a classifier layer. So you probably want to load the weights into another model architecture. Probably one of these (instead of just `BertModel`):
- BertForSequenceClassification
- BertForTokenClassification
- BertForMultipleChoice<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,474 | closed | ALBERT tokenizer : local variable 'tokenizer' referenced before assignment | ## 🐛 Bug
<!-- Important information -->
Model I am using BERT and ALBERT
Language I am using the model on: English
The problem arise when using:
* my own script
The tasks I am working on is:
* my own task or dataset: text classification
## To Reproduce
Steps to reproduce the behavior:
```$ pip install transformers```
This installs version 2.3.0
```
>>> from transformers import AlbertTokenizer
>>> tokenizer = AlbertTokenizer.from_pretrained("albert-base")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 302, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 444, in _from_pretrained
tokenizer.init_inputs = init_inputs
UnboundLocalError: local variable 'tokenizer' referenced before assignment
```
## Expected behavior
It works perfectly with Bert, e.g.:
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
```
## Environment
* OS: MacOsX 10.14
* Python version: 3.7.5
* TensorFlow version: 2.0
* Using GPU ? No
* Distributed or parallel setup ? None
| 01-09-2020 16:03:20 | 01-09-2020 16:03:20 | The error is misleading, I’ll fix that. Your error stems from the tokenizer initialization: there is no pretrained checkpoint called `albert-base`, only `albert-base-v1` or `albert-base-v2`.
You can check the list of pretrained checkpoints [here](https://huggingface.co/transformers/pretrained_models.html).<|||||>Oh, I see. I confirm it works correctly if I load the model `albert-base-v2`. Thanks for taking care of improving the error message!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,473 | closed | Using Transformer Library for code prediction | Dear all,
I am new in exploring transformer library. I would like to use transformer models to train on my own text corpus (.txt) containing C++ source code tokens seperated with space characters. I would like to provide tokenized C++ source code files from multiple repositories in textual format (.txt) , and function should give me trained models with accuracy results, which I can use for code prediction latter on.
I have came accross with [Deep TabNine](https://tabnine.com/blog/deep/), which has used GPT2. But, I donot know about the following:
1. How could I train tranformer's library GPT2 model for C++ tokenized code?
2. Can I use all transformer models such as BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet etc to train on my own C++ tokenized code?
3. If no, which one can be used which one cannot and how?
4. Is it advisable to use transformer pretrained models by fine tunning over my own textual corpus of C++ tokens? or should I build trained models from the scratch by using transformer library?
Please let me know about it.
| 01-09-2020 12:47:25 | 01-09-2020 12:47:25 | You should probably train a model from scratch.
Here a few links that are relevant:
- our blog post on [how to train a model from scratch](https://huggingface.co/blog/how-to-train) using `transformers` and `tokenizers`.
- specifically on the topic of code, we just uploaded [CodeBERTa](https://huggingface.co/huggingface/CodeBERTa-small-v1#codeberta), a model pretrained on the `CodeSearchNet` dataset from GitHub (+ fine-tuned to a classification task)
Let us know how it goes. |
transformers | 2,472 | closed | Pytorch T5 does not run on GPU | ## 🐛 Bug
When I try to run T5 from the latest transformers version (and also from the most recent git version) on the GPU, I get the following error:
```
Traceback (most recent call last):
File "T5_example.py", line 32, in <module>
outputs = model(input_ids=input_ids)
File "/home/reimers/anaconda3/envs/sbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/reimers/sbert/transformers/src/transformers/modeling_t5.py", line 780, in forward
File "/home/reimers/anaconda3/envs/sbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/reimers/sbert/transformers/src/transformers/modeling_t5.py", line 616, in forward
encoder_decoder_position_bias=encoder_decoder_position_bias,
File "/home/reimers/anaconda3/envs/sbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/reimers/sbert/transformers/src/transformers/modeling_t5.py", line 422, in forward
self_attention_outputs = self.layer[0](
File "/home/reimers/anaconda3/envs/sbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/reimers/sbert/transformers/src/transformers/modeling_t5.py", line 373, in forward
attention_output = self.SelfAttention(
File "/home/reimers/anaconda3/envs/sbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/reimers/sbert/transformers/src/transformers/modeling_t5.py", line 338, in forward
raise ValueError("No position_bias provided and no weights to compute position_bias")
File "/home/reimers/sbert/transformers/src/transformers/modeling_t5.py", line 289, in compute_bias
values = self.relative_attention_bias(rp_bucket)
File "/home/reimers/anaconda3/envs/sbert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/reimers/anaconda3/envs/sbert/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/reimers/anaconda3/envs/sbert/lib/python3.7/site-packages/torch/nn/functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select
```
This is the example code to reproduce the problem:
```
from transformers import T5Model, T5Tokenizer
import torch
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5Model.from_pretrained('t5-small')
model = model.to('cuda')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute"), device='cuda').unsqueeze(0)
outputs = model(input_ids=input_ids)
last_hidden_states = outputs[0]
```
The error is in the file modeling_t5.py at line 284-289:
```
rp_bucket = self._relative_position_bucket(
relative_position, # shape (qlen, klen)
bidirectional=not self.is_decoder,
num_buckets=self.relative_attention_num_buckets,
)
values = self.relative_attention_bias(rp_bucket) # shape (qlen, klen, num_heads)
```
rp_bucket is a tensor on the CPU, which causes the above error.
If I move rp_bucket to the GPU, the code works correctly on the GPU:
```
rp_bucket = self._relative_position_bucket(
relative_position, # shape (qlen, klen)
bidirectional=not self.is_decoder,
num_buckets=self.relative_attention_num_buckets,
)
rp_bucket = rp_bucket.to('cuda') #Dirty quick fix
values = self.relative_attention_bias(rp_bucket) # shape (qlen, klen, num_heads)
```
I'm not sure why rp_bucket is on the CPU. | 01-09-2020 12:36:16 | 01-09-2020 12:36:16 | I can also confirm that T5 runs on CPU but not GPU -- thanks for the hack fix, will use that until GPU tensor is fixed. <|||||>Hi,
I was planning to run some examples with T5 on GPU. Is this already been fixed on GPU ?<|||||>@mohammedayub44, in v2.5.0 it works without any issue, I guess yes.
```
print (last_hidden_states)
tensor([[[ 9.2098e-02, 1.1048e-01, 2.6714e-02, ..., 1.2918e-02,
6.1260e-05, 9.5352e-02],
[ 8.7042e-02, 8.3914e-02, 6.9337e-02, ..., -3.9229e-02,
3.3525e-04, 1.4291e-01],
[ 9.6290e-02, -4.8915e-03, 5.5687e-02, ..., -1.0703e-01,
6.4940e-04, -2.1393e-01],
[-3.0119e-03, 1.1048e-01, 3.0696e-03, ..., -5.1768e-02,
3.5166e-04, 1.5510e-01],
[-6.3620e-02, 5.4474e-02, -1.8415e-02, ..., -8.4559e-02,
6.1696e-04, 5.8805e-02],
[-6.0232e-02, 1.3885e-01, 7.9865e-03, ..., -4.9981e-02,
4.3370e-04, 4.4865e-02]]], device='cuda:0', grad_fn=<MulBackward0>)
```<|||||>Great I'll check it out. Thanks. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Has this been corrected? I'm on version 2.8.0 on a GCP AI Platform Notebook using the PyTorch:1.4 image and I'm still getting this error.
`
cuda0 = torch.device('cuda:0')
tokenized_text = tokenizer.encode(t5_prepared_Text, return_tensors="pt", max_length=512).to(cuda0)
summary_ids = model.generate(tokenized_text,
num_beams=2,
no_repeat_ngram_size=2,
min_length=50,
max_length=100,
early_stopping=True, )
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select
`<|||||>I can confirm also having this issue on 2.10.0<|||||>```
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select
```
```
model.to(DEVICE)
model.train()
input_ids.to(DEVICE)
lm_labels.to(DEVICE)
loss = model(input_ids=input_ids, lm_labels=lm_labels)[0]
loss.backward()
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,471 | closed | Using T5 | How can i use T5 model like in the paper , input to the model "Machine Translation #Some Text#" and it outputs its translation ?
| 01-09-2020 11:39:13 | 01-09-2020 11:39:13 | I am also looking for inference example. I tried using GPT-2 style inference but it does not work at all<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,470 | closed | How pipeline can use a ner finetuned model from a local directory ? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
I fine tuned XLM-Roberta on my own NER dataset so I got a folder containing pytorch_model.bin and all the other stuff.
But the problem is that I do not figure out how to use this model with the pipeline.
Below is an example of how I used it and the generated error:
Usage:
nlp = pipeline('ner',model= XLMRobertaForTokenClassification.from_pretrained('./1-out/checkpoint-24924/'))
Error:
OSError: Model name './1-out/checkpoint-24924/' was not found in model name list (xlm-roberta-base, ...)
Instead of XLMRobertaForTokenClassification I tried AutoModel and PreTrainedModel but I still get the same error. I also added tokenizer=AutoTokenizer.from_pretrained, etc but with no luck.
Any help is appreciated!
Thank you! | 01-09-2020 11:35:27 | 01-09-2020 11:35:27 | Hi,
Just an update on this issue. I managed to get it work like this:
`model = XLMRobertaForTokenClassification.from_pretrained('./2-out/')`
`tokenizer = XLMRobertaTokenizer.from_pretrained('./2-out/')`
`nlp = pipeline('ner',model= model,tokenizer=tokenizer)`
`nlp('blabla').`
The problem is that the output gives labels for individual tokens and not for complete words. This issue was mentioned also [here](https://github.com/huggingface/transformers/issues/2488).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,469 | closed | Add PRETRAINED_INIT_CONFIGURATION to DistilBERT tokenizer | The DistilBERT tokenizer does not make use of `PRETRAINED_INIT_CONFIGURATION`, instead loading BERT's.
This PR fixes this, fixing the issue detailed in #2423. | 01-09-2020 11:15:05 | 01-09-2020 11:15:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2469?src=pr&el=h1) Report
> Merging [#2469](https://codecov.io/gh/huggingface/transformers/pull/2469?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f599623a99b808e3d5926d89cd13237457b9eeba?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2469?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2469 +/- ##
==========================================
+ Coverage 73.23% 73.24% +<.01%
==========================================
Files 87 87
Lines 15003 15005 +2
==========================================
+ Hits 10988 10990 +2
Misses 4015 4015
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2469?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2469?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2469?src=pr&el=footer). Last update [f599623...89df3b4](https://codecov.io/gh/huggingface/transformers/pull/2469?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,468 | closed | Error in BertForMaskedLM with add_tokens | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert / BertForMaskedLM
Language I am using the model on (English, Chinese....): bert-base-multilingual-cased
The problem arise when using:
* [X] the official example scripts: (give details)
* [X] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details)
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
I am trying to fine tune the BERT language model using the pretrained bert-base-multilingual-cased tokenizer where I add 22 new tokens. I use the pretrained bert-base-multilingual-cased BertForMaskedLM model and run it all using the run_lm_finetuning train script.
Here is what I do:
```
from transformers import BertForMaskedLM, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
tokenizer.add_tokens(my_new_tokens_list) #Consisting of 22 new word pieces
model = BertForMaskedLM.from_pretrained(model_name_or_path)
model.resize_token_embeddings(len(tokenizer))
model.to(torch.device(type='cuda'))
from transformers_fromGITHUB.examples import run_lm_finetuning
dataset = run_lm_finetuning.load_and_cache_examples(args, tokenizer, evaluate=False)
global_step, tr_loss = run_lm_finetuning.train(args, train_dataset, model, tokenizer)
```
When I run this last step: `global_step, tr_loss = run_lm_finetuning.train(args, train_dataset, model, tokenizer)` I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mylib/BERTlm/transformers_fromGITHUB/examples/run_lm_finetuning.py", line 304, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) | 0/1 [00:00<?, ?it/s]
File "mylib/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ | 0/6359 [00:00<?, ?it/s]
result = self.forward(*input, **kwargs)
File "mylib/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_bert.py", line 887, in forward
prediction_scores = self.cls(sequence_output)
File "mylib/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "mylib/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_bert.py", line 459, in forward
prediction_scores = self.predictions(sequence_output)
File "mylib/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "mylib/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_bert.py", line 449, in forward
hidden_states = self.decoder(hidden_states) + self.bias
RuntimeError: The size of tensor a (119569) must match the size of tensor b (119547) at non-singleton dimension 2
```
If I run it all without adding new tokens (skipping `tokenizer.add_tokens(my_new_tokens_list)` and `model.resize_token_embeddings(len(tokenizer))`) all works fine!
Having looked a bit around, the only place there are 119547 tokens, are in the tokenizer.vocab_size - all others are 119569:
```
>>> tokenizer.vocab_size
119547
>>> model.config.vocab_size
119569
>>> model.get_input_embeddings()
Embedding(119569, 768)
```
So can I somehow change the vocab_size in the tokenizer?
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Ubuntu 18.04
* Python version: 3.7.5
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.2 / Git repo master comit f599623
* Using GPU ? Yes
* Distributed or parallel setup ? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| 01-09-2020 09:17:26 | 01-09-2020 09:17:26 | I also have the same problem with AlbertForMaskedLM.
I have tried all version of the git repo as well as pip installs.
Basically I add tokens
`from transformers import AlbertForMaskedLM, AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
tokenizer.add_tokens(myvocab.get_unique_words_to_add()) #add news words from out corpus not in the spiece model. 37 words in total
model = AlbertForMaskedLM.from_pretrained(model_name_or_path)
model.resize_token_embeddings(len(tokenizer))
model.to(torch.device(type='cuda'))`
...
...
`
I receive the error
> RuntimeError: The size of tensor a (30037) must match the size of tensor b (30000) at non-singleton dimension 2
Environment
OS: Ubuntu 16.04
Python version: 3.6.9
PyTorch version: 1.3.1
PyTorch Transformers version (or branch): All Albert compatible branches and pip installs (2.3 as of last test)
Using GPU ? Yes
Distributed or parallel setup ? yes
Any other relevant information:<|||||>Hi, I've pushed a fix that was just merged in `master`. Could you please try and install from source:
```py
pip install git+https://github.com/huggingface/transformers
```
and tell me if you face the same error?<|||||>Forgot to update this issue - but yes, it now works.
https://github.com/huggingface/transformers/issues/2480#issuecomment-574548989 |
transformers | 2,467 | closed | Add japanese | 01-09-2020 06:39:19 | 01-09-2020 06:39:19 | sorry i made mistake again because i push create pullreq too early. |
|
transformers | 2,466 | closed | GPT-2 XL PyTorch Quantization for use on a Cloud Server | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I want to run a fast (well, relatively) and interactive version of GPT-2 XL on an Ubuntu 18.04 Cloud Server using python. I have no intention of using the model for anything other than giving it a prompt and getting a generated response out of it.
I know that quantized models are usually used for mobile devices, but I want to use it on a server. Using a python script from a [huggingface tutorial](https://towardsdatascience.com/on-device-machine-learning-text-generation-on-android-6ad940c00911), I was able to convert the tensorflow version of GPT-2 small and medium over to `.tflite` files. When I tried to convert GPT-2 Large however, I ran into the same memory error as [here](https://github.com/huggingface/tflite-android-transformers/issues/4). There was an answer to a semi-related [stack overflow](https://stackoverflow.com/a/36358913) question which suggested looping through the data to be quantized, but I couldn't figure out how to apply this method to GPT-2. I suspect it might be able to be done by looping through the decoder layers and merging them afterwards.
In any case, I then moved onto the PyTorch versions of the models (Thank you so much by the way for providing these!). PyTorch recently released support for quantizing models. I've been trying to adapt the [BERT quantization tutorial](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html#apply-the-dynamic-quantization) to GPT-2, but I keep getting the error `RuntimeError: Could not run 'aten::quantize_per_tensor' with arguments from the 'CUDATensorId' backend. 'aten::quantize_per_tensor' is only available for these backends: [CPUTensorId, VariableTensorId].`. Here's a code snippet:
```
def text_generator(
text="",
quiet=False,
nsamples=1,
unconditional=None,
batch_size=-1,
length=-1,
temperature=0.7,
top_k=40,
):
if os.path.exists("bin/gpt2-large-pytorch_model.bin"):
state_dict = torch.load(
"bin/gpt2-large-pytorch_model.bin",
map_location="cpu" if not torch.cuda.is_available() else None,
)
else:
print("Please download gpt2-pytorch_model.bin and/or place in bin folder")
sys.exit()
if batch_size == -1:
batch_size = 1
assert nsamples % batch_size == 0
seed = random.randint(0, 2147483647)
np.random.seed(seed)
torch.random.manual_seed(seed)
torch.cuda.manual_seed(seed)
print("CUDA AVAILABILITY: {}".format(torch.cuda.is_available()))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load Model
enc = get_encoder()
config = GPT2Config(
vocab_size_or_config_json_file=50257,
n_positions=1024,
n_ctx=1024,
n_embd=1280,
n_layer=36,
n_head=20,
layer_norm_epsilon=1e-5,
initializer_range=0.02,
)
model = GPT2LMHeadModel(config)
model = load_weight(model, state_dict)
model.share_memory()
model.to(device)
model.eval()
print(model)
quantized_model = torch.quantization.quantize_dynamic(
model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized_model)
```
This program is a slightly modified version of graykode's repo [here](https://github.com/graykode/gpt-2-Pytorch/blob/master/main.py). Is there a way for me to quantize the PyTorch version of GPT-2, or is as of now impossible? | 01-09-2020 05:57:28 | 01-09-2020 05:57:28 | I found that by changing `device = torch.device("cuda" if torch.cuda.is_available() else "cpu")` to `device = torch.device("cpu")` the program was able to continue, except the quantized models are larger for some reason...
| | Old Size | New Size |
|-------------|----------|----------|
| small | 548.1MB | 586.7MB |
| medium | 1.5GB | 1.6GB |
| large | 3.2GB | 3.3GB |
| extra large | 6.4 | 6.5 |
<|||||>In the line where I quantize the model (`quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)`), swapping out `torch.nn.Linear` for `torch.nn.Bilinear` works better, except the file size is still the same as the unquantized model. To that extent, performance is also worse than the unquantized model.
I tried swapping out `qint8` for `float16` but I just got similar results.<|||||>I'm in the same boat, here is my script:
```from __future__ import absolute_import, division, print_function
import logging
import numpy as np
import os
import random
import sys
import time
import torch
from argparse import Namespace
from torch.utils.data import (DataLoader, RandomSampler, SequentialSampler,
TensorDataset)
from tqdm import tqdm
from transformers import (GPT2Config, GPT2Model, GPT2Tokenizer,)
from transformers import glue_compute_metrics as compute_metrics
from transformers import glue_output_modes as output_modes
from transformers import glue_processors as processors
from transformers import glue_convert_examples_to_features as convert_examples_to_features
# Setup logging
logger = logging.getLogger(__name__)
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.WARN)
logging.getLogger("transformers.modeling_utils").setLevel(
logging.WARN) # Reduce logging
print(torch.__version__)
"""We set the number of threads to compare the single thread performance between FP32 and INT8 performance. In the end of the tutorial, the user can set other number of threads by building PyTorch with right parallel backend."""
torch.set_num_threads(1)
print(torch.__config__.parallel_info())
configs = Namespace()
# The output directory for the fine-tuned model.
configs.output_dir = "./pytorch_models/pytorch-openai-transformer-lm/model"
# The model name or path for the pre-trained model.
configs.model_name_or_path = "pytorch_model.bin"
# The maximum length of an input sequence
configs.max_seq_length = 128
configs.task_name = "MRPC".lower()
configs.processor = processors[configs.task_name]()
configs.output_mode = output_modes[configs.task_name]
configs.label_list = configs.processor.get_labels()
configs.model_type = "bert".lower()
configs.do_lower_case = True
# Set the device, batch size, topology, and caching flags.
configs.device = "cpu"
configs.per_gpu_eval_batch_size = 8
configs.n_gpu = 0
configs.local_rank = -1
configs.overwrite_cache = False
# Set random seed for reproducibility.
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
set_seed(42)
model = GPT2Model.from_pretrained(configs.output_dir)
model.to(configs.device)
quantized_model = torch.quantization.quantize_dynamic(
model, dtype=torch.qint8
)
quantized_output_dir = configs.output_dir + "quantized/"
if not os.path.exists(quantized_output_dir):
os.makedirs(quantized_output_dir)
quantized_model.save_pretrained(quantized_output_dir)
```<|||||>Could you surround your code in triple tick marks to make your code more readable?
Microsoft has apparently open sourced a [distilled variant of GPT-2](https://github.com/microsoft/DialoGPT) designed for conversations. It's based off of Huggingface's work [here](https://github.com/huggingface/transfer-learning-conv-ai) and has the option of being trained in FP16, which sounds promising.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Were you able to get a quantized version of GPT-2?<|||||>I wasn't. Turns out there is a operation that is not supported by tensorflow yet. I don't remember what because it was a time ago. Gave up on the project. Sorry if this isn't very useful. Just updating.<|||||>I unfortunately wasn't able to create/find a quantized model either. I just ended up using the full XL Model instead.<|||||>I managed to quantize Pytorch GPT-2 XL to int8 with `quantize_torch_model` method from [this script](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/quantize_helper.py). As easy as:
```model = QuantizeHelper.quantize_torch_model(model)```
After that I observed 4x speedup on CPU (and changes in predicted scores).
You might also want converting it to torchscript with
```inference = torch.jit.trace(model, input_ids)```
You can find the complete usage example [here](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/benchmark.py). <|||||>Thank you for updating. I perused the code and I saw a condition which requires CPU for INT8 and GPU for FP16. Is the INT8 model runnable on GPU? <|||||>@omeysalvi I didn't test in8 version on GPU, but these benchmarks even avoid this combination, so I guess it is not something very promising to run int8 on GPU.<|||||>@omeysalvi just tried int8 on GPU — doesn't work. Error:
`RuntimeError: Could not run 'quantized::linear_dynamic' with arguments from the 'CUDATensorId' backend. 'quantized::linear_dynamic' is only available for these backends: [CPUTensorId].`
I guess fp16 is generally recommended for optimizing GPT2 on GPU.<|||||>Hi @klimentij. Thanks for the method.
I have tried it with the following code, and it works very well
```
model = model_class.from_pretrained(model_name_or_path)
model = QuantizeHelper.quantize_torch_model(model)
model.to(device)
```
but if I try to save the quantized model and reload it by
```
model.save_pretrained(quantized_model_path)
model = model_class.from_pretrain(quantized_model_path)
model.to(device)
```
the saved qunatized model size is about half of the initial model as it only quantized Conv1D/Linear layer.
But the quantized model generated very strange results which has none sense...
Do you know any possible reason?
<|||||>@carter54 for my purposes int8 generation quality was not acceptable (but not nonsense, more like from GPT2-small), so I didn't even try to save it. If you're okay with generation quality after quantization, I'd try saving it using other means (e.g. torch native saving or even pickling), avoiding `save_pretrained`.<|||||>@klimentij Thanks mate, I will have a try.<|||||>@carter54 I also run into the same problem, the model is doing well before saved and loaded with `save_pretrained`. Inference using saved and loaded quantized model gives 50% less F1 score. Have you tried what @klimentij suggested? Mind to share the result? Thanks in advance.<|||||>I had issues with klimentij's suggestion but I solved it by extracting the `conv1d_to_linear` functions. I had to load a previous model into a pretrained version of GPT2 so ignore that part if you don't have to do it.
```python
def _conv1d_to_linear(module):
in_size, out_size = module.weight.shape
linear = torch.nn.Linear(in_size, out_size)
linear.weight.data = module.weight.data.T.contiguous()
linear.bias.data = module.bias.data
return linear
def conv1d_to_linear(model):
"""in-place
This is for Dynamic Quantization, as Conv1D is not recognized by PyTorch, convert it to nn.Linear
"""
for name in list(model._modules):
module = model._modules[name]
if isinstance(module, Conv1D):
linear = _conv1d_to_linear(module)
model._modules[name] = linear
else:
conv1d_to_linear(module)
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
text = "Test Text."
tokens = tokenizer(text, return_tensors="pt")["input_ids"]
model = torch.load("../model.pt")
model.resize_token_embeddings(len(tokenizer))
model.eval()
pretrained_model = GPT2LMHeadModel.from_pretrained("gpt2-xl", torchscript=True)
pretrained_model.resize_token_embeddings(len(tokenizer))
pretrained_model.load_state_dict(model.state_dict())
pretrained_model.eval()
conv1d_to_linear(pretrained_model)
quantized_model = torch.quantization.quantize_dynamic(
pretrained_model, {torch.nn.Linear}, dtype=torch.qint8
)
traced_model = torch.jit.trace(quantized_model, tokens)
torch.jit.save(traced_model, "quantized_traced_model.pt")
def print_size_of_model(model):
torch.save(model.state_dict(), "temp.p")
print("Size (MB):", os.path.getsize("temp.p") / 1e6)
os.remove("temp.p")
print_size_of_model(pretrained_model)
print_size_of_model(quantized_model)
```<|||||>> I managed to quantize Pytorch GPT-2 XL to int8 with `quantize_torch_model` method from [this script](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/quantize_helper.py). As easy as:
> `model = QuantizeHelper.quantize_torch_model(model)`
>
> After that I observed 4x speedup on CPU (and changes in predicted scores).
>
> You might also want converting it to torchscript with
> `inference = torch.jit.trace(model, input_ids)`
>
> You can find the complete usage example [here](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/benchmark.py).
Hi @klimentij,
I am able to use the QuantizeHelper class to convert Conv1D to Linear layer (in `DistilGPT2`) and thereby using it to quantize.
The problem I face now is, them the last `lm.head` layer is being quantized it converts
`lm_head.weight`
to
`lm_head.scale
lm_head.zero_point
lm_head._packed_params.weight
lm_head._packed_params.bias`
Now, the quantized params in the layer `lm_head._packed_params.bias` is just `None`.
What shall be done in this case?<|||||>following<|||||>> @carter54 I also run into the same problem, the model is doing well before saved and loaded with `save_pretrained`. Inference using saved and loaded quantized model gives 50% less F1 score. Have you tried what @klimentij suggested? Mind to share the result? Thanks in advance.
I also tried to quantize the GPT2 model but the generated text is really not good. I did some research and find the quantization aware training could be a solution, but it requires implementing the GPT2 model from scratch. |
transformers | 2,465 | closed | Fix Tokenizer.from_pretrained `raise OSError` | `raise` before OSError seems to be forgotten. | 01-09-2020 05:03:34 | 01-09-2020 05:03:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2465?src=pr&el=h1) Report
> Merging [#2465](https://codecov.io/gh/huggingface/transformers/pull/2465?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f599623a99b808e3d5926d89cd13237457b9eeba?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2465?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2465 +/- ##
=======================================
Coverage 73.23% 73.23%
=======================================
Files 87 87
Lines 15003 15003
=======================================
Hits 10988 10988
Misses 4015 4015
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2465?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.56% <ø> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2465?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2465?src=pr&el=footer). Last update [f599623...c217821](https://codecov.io/gh/huggingface/transformers/pull/2465?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,464 | closed | How to run the "run_lm_finetuning.py" with my own corpus? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello! I am a phD student in Beijing Normal University. I'm trying to further pre-train the model "bert-base-chinese" with my own corpus using the "run_lm_finetuning.py". However in the Examples, there is only an example using WikiText-2. If I use my corpus, what format should my data file has ? Thank you ! | 01-09-2020 04:11:46 | 01-09-2020 04:11:46 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,463 | closed | How to use GPU to do inference ? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
When I use pretrained model to infer, how can I use GPU?
```
tokenizer = XLNetTokenizer.from_pretrained('your-folder-name')
model = XLNetModel.from_pretrained('your-folder-name')
inputs = torch.tensor([tokenizer.encode("你好GitHub!")])
states = model(inputs)[0][0]
```
Like the code above, If I wanna use GPU when `states = model(inputs)[0][0]`, What should I do ? | 01-09-2020 02:07:54 | 01-09-2020 02:07:54 | Ok, I find the way. Just do it like the naive Pytorch code. |
transformers | 2,462 | closed | TF2 version of Multilingual DistilBERT throws an exception [TensorFlow 2] | ## 🐛 Bug
I'm finding that several of the TensorFlow 2.0 Sequence Classification models don't seem to work. Case in point: `distilbert-base-uncased` works but `distilbert-base-multilingual-cased` does not.
My environment is:
* Platform Linux-4.15.0-65-generic-x86_64-with-Ubuntu-18.04-bionic
* Python 3.6.8 (default, Oct 7 2019, 12:59:55)
* [GCC 8.3.0]
* Tensorflow 2.0.0
Note that I am using v2.3.0 of `transformers` with patch [1efc208](https://github.com/huggingface/transformers/commit/1efc208ff386fb6df56302c8f6f9484ddf93b92a) applied to work around [this issue](https://github.com/huggingface/transformers/issues/2251).
However, problems with `distilbert-base-multilingual-cased` occur in v2.2.0, as well.
Here is code to reproduce the problem.
```
# define constants
MODEL_NAME = 'distilbert-base-multilingual-cased' # DOES NOT WORK
# MODEL_NAME = 'distilbert-base-uncased' # WORKS if uncommented
BATCH_SIZE=6
MAX_SEQ_LEN = 500
# imports and setup
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
os.environ["CUDA_VISIBLE_DEVICES"]="0";
import tensorflow as tf
from transformers import glue_convert_examples_to_features
from transformers import BertConfig, TFBertForSequenceClassification, BertTokenizer
from transformers import XLNetConfig, TFXLNetForSequenceClassification, XLNetTokenizer
from transformers import XLMConfig, TFXLMForSequenceClassification, XLMTokenizer
from transformers import RobertaConfig, TFRobertaForSequenceClassification, RobertaTokenizer
from transformers import DistilBertConfig, TFDistilBertForSequenceClassification, DistilBertTokenizer
from transformers import AlbertConfig, TFAlbertForSequenceClassification, AlbertTokenizer
TRANSFORMER_MODELS = {
'bert': (BertConfig, TFBertForSequenceClassification, BertTokenizer),
'xlnet': (XLNetConfig, TFXLNetForSequenceClassification, XLNetTokenizer),
'xlm': (XLMConfig, TFXLMForSequenceClassification, XLMTokenizer),
'roberta': (RobertaConfig, TFRobertaForSequenceClassification, RobertaTokenizer),
'distilbert': (DistilBertConfig, TFDistilBertForSequenceClassification, DistilBertTokenizer),
'albert': (AlbertConfig, TFAlbertForSequenceClassification, AlbertTokenizer),
}
def classes_from_name(model_name):
name = model_name.split('-')[0]
return TRANSFORMER_MODELS[name]
# setup model and tokenizer
(config_class, model_class, tokenizer_class) = classes_from_name(MODEL_NAME)
tokenizer = tokenizer_class.from_pretrained(MODEL_NAME)
model = model_class.from_pretrained(MODEL_NAME)
# construct binary classification dataset
categories = ['alt.atheism', 'comp.graphics']
from sklearn.datasets import fetch_20newsgroups
train_b = fetch_20newsgroups(subset='train',
categories=categories, shuffle=True, random_state=42)
test_b = fetch_20newsgroups(subset='test',
categories=categories, shuffle=True, random_state=42)
print('size of training set: %s' % (len(train_b['data'])))
print('size of validation set: %s' % (len(test_b['data'])))
print('classes: %s' % (train_b.target_names))
x_train = train_b.data
y_train = train_b.target
x_test = test_b.data
y_test = test_b.target
train_csv = [(i, text, y_train[i]) for i, text in enumerate(x_train)]
valid_csv = [(i, text, y_test[i]) for i, text in enumerate(x_test)]
def convert_to_tfdataset(csv):
def gen():
for ex in csv:
yield {'idx': ex[0],
'sentence': ex[1],
'label': str(ex[2])}
return tf.data.Dataset.from_generator(gen,
{'idx': tf.int64,
'sentence': tf.string,
'label': tf.int64})
trn = convert_to_tfdataset(train_csv)
val = convert_to_tfdataset(valid_csv)
# preprocess datasets
train_dataset = glue_convert_examples_to_features(examples=trn, tokenizer=tokenizer
, max_length=MAX_SEQ_LEN, task='sst-2'
, label_list =['0', '1'])
valid_dataset = glue_convert_examples_to_features(examples=val, tokenizer=tokenizer
, max_length=MAX_SEQ_LEN, task='sst-2'
, label_list =['0', '1'])
train_dataset = train_dataset.shuffle(len(train_csv)).batch(BATCH_SIZE).repeat(-1)
valid_dataset = valid_dataset.batch(BATCH_SIZE)
# train model
opt = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=opt, loss=loss, metrics=[metric])
history = model.fit(train_dataset, epochs=1, steps_per_epoch=len(train_csv)//BATCH_SIZE,
validation_data=valid_dataset, validation_steps=len(valid_csv)//BATCH_SIZE)
```
The code above produces the following error:
```
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
529 'Expected to see ' + str(len(names)) + ' array(s), '
530 'but instead got the following list of ' +
--> 531 str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
532 elif len(names) > 1:
533 raise ValueError('Error when checking model ' + exception_prefix +
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 8 array(s), but instead got the following list of 1 arrays: [<tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=int64>]...
```
However, if you set MODEL_NAME to `distilbert-base-uncased`, everything works.
Other models that I've found do not work in TF2 include `xlnet-base-cased`. To reproduce, set MODEL_NAME to `xlnet-base-cased` in the code above. The `xlnet-base-cased` model also throws an exception during the call to `model.fit`.
 | 01-08-2020 22:52:20 | 01-08-2020 22:52:20 | The same error happens to me with `distilbert-base-multilingual-cased`.<|||||>Hello!
I got the same error. After having investigated a bit, I found that the error is because the field `output_hidden_states` in the configuration file of the model `distilbert-base-multilingual-cased` is set to `true` instead of `false`. As a workaround you can do:
```
from transformers import DistilBertConfig, TFDistilBertForSequenceClassification

config = DistilBertConfig.from_pretrained("distilbert-base-multilingual-cased", output_hidden_states=False)
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-multilingual-cased", config=config)
```
And it will work.
@julien-c or @LysandreJik maybe it would be better to update the config file in the S3 repo, what do you think? That would align it with the other models.<|||||>Hi, thank you all for raising this issue and looking into it. As @jplu mentioned, this was an issue with the `output_hidden_states` in the configuration files. It was the case for two different checkpoints: `distilbert-base-multilingual-cased` and `distilbert-base-german-cased`.
I've updated the files on S3 and could successfully run your script @amaiya.<|||||>Thanks @jplu and @LysandreJik
Works great now:
```python
# construct toy text classification dataset
categories = ['alt.atheism', 'comp.graphics']
from sklearn.datasets import fetch_20newsgroups
train_b = fetch_20newsgroups(subset='train',
categories=categories, shuffle=True, random_state=42)
test_b = fetch_20newsgroups(subset='test',
categories=categories, shuffle=True, random_state=42)
x_train = train_b.data
y_train = train_b.target
x_test = test_b.data
y_test = test_b.target
# train with ktrain interface to transformers
import ktrain
from ktrain import text
t = text.Transformer('distilbert-base-multilingual-cased', maxlen=500, classes=train_b.target_names)
trn = t.preprocess_train(x_train, y_train)
val = t.preprocess_test(x_test, y_test)
model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)
learner.fit_onecycle(3e-5, 1)
```
```
begin training using onecycle policy with max lr of 3e-05...
Train for 178 steps, validate for 118 steps
178/178 [==============================] - 51s 286ms/step - loss: 0.2541 - accuracy: 0.8816 - val_loss: 0.0862 - val_accuracy: 0.9746
``` |
transformers | 2,461 | closed | For Hugging Face transformer's hidden_states output, is the first hidden state tensor that is returned the out of the embeddings? | According to the Hugging Face Transformer documentation for the GPT2DoubleHeadsModel (under the 'output' section)
```
hidden_states: (optional, returned when config.output_hidden_states=True)
list of torch.FloatTensor (one for the output of each layer + the output of the embeddings)
```
So in this case, would the first hidden_states tensor (index of 0) that is returned be the output of the embeddings, or would the very last hidden_states tensor that is returned be the output of the embeddings?
I am confused about the order in which the hidden_states tensors are returned, because the documentation seem to indicate that the output of the embeddings is the last hidden_state tensor that is returned.
Thank you,
| 01-08-2020 17:13:24 | 01-08-2020 17:13:24 | Indeed, the documentation might be misleading in that regard. The first value is the embedding output, every following value is the result of the preceding value being passed through an additional layer. I'll update the documentation shortly.<|||||>@LysandreJik So will output.hidden_states[-1] be the output of the last hidden layer (right before LM head)? |
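A small sketch illustrating the ordering described in the answer above. It assumes the 2.x tuple-style return values, and the public `gpt2` checkpoint is used purely for illustration; with default settings the hidden-states tuple is the last element of the outputs when attentions are not returned.

```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2", output_hidden_states=True)

input_ids = torch.tensor([tokenizer.encode("Hello world")])
with torch.no_grad():
    outputs = model(input_ids)

hidden_states = outputs[-1]     # tuple of config.n_layer + 1 tensors
print(len(hidden_states))       # 13 for GPT-2 small: embeddings + 12 blocks
print(hidden_states[0].shape)   # index 0: output of the embeddings
print(hidden_states[-1].shape)  # index -1: output of the final block
```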
transformers | 2,460 | closed | Fine-tuning pretrained BERT model using own dataset but with same training task | ## ❓ Questions & Help
I would like to finetune a pretrained model using the same task as the original model was trained on, so this means that I want the model to predict masked words and do next sentence prediction. Is there anywhere some code snippet that achieves this or gives an idea on how I can implement this?
| 01-08-2020 16:03:56 | 01-08-2020 16:03:56 | Here is very barebone but working example. It does not have next sentence prediction code but it will work for masked language model:
```python
import numpy as np
import tensorflow as tf
from transformers import *
MODEL = 'distilbert-base-uncased'
model = TFDistilBertForMaskedLM.from_pretrained(MODEL)
tokenizer = DistilBertTokenizer.from_pretrained(MODEL)
sent = tokenizer.encode('people lost their jobs to ai')
sent = np.array([sent])
inpx = sent.copy()
inpx[0][1] = tokenizer.vocab['[MASK]'] # Replace people with mask token
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()
# Try to overfit model for single example
for _ in range(10):
with tf.GradientTape() as g:
out, = model(inpx)
loss_value = loss_object(y_true=sent, y_pred=out)
gradients = g.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(loss_value.numpy())
print('>', tokenizer.decode(model(inpx)[0].numpy()[0].argmax(-1)))
```
You will have to handle proper loss masking and other things like warmup etc.<|||||>@stefanknegt I have the same question...Now I am trying to implement this according to the tutorial "Language model fine-tuning" based on `run_lm_finetuning.py` in https://github.com/huggingface/transformers/blob/master/examples/README.md. Maybe it works......
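Regarding the loss-masking caveat mentioned above, one possible sketch on top of that snippet (my own addition, not part of the original answer) is to weight the loss so that only the masked position contributes:

```python
# restrict the loss to the [MASK] position(s) via per-token sample weights
mask_positions = tf.cast(inpx == tokenizer.vocab['[MASK]'], tf.float32)
loss_value = loss_object(y_true=sent, y_pred=out, sample_weight=mask_positions)
```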
<|||||>@JiangYanting Haha, I have seen you in other issues. Are you done with your exams? Can this model do NSP and MLM directly?<|||||>@TLCFYBJJHYYSND Haha, nice to meet you! Further pre-training still does not seem to work... Using run_lm_finetuning.py and following the example, I still get the error “ValueError: num_samples should be a positive integeral value, but got num_samples=0”<|||||>@JiangYanting I keep getting this error, have you ever run into it?
RuntimeError: CUDA error: device-side assert triggered
<|||||>@TLCFYBJJHYYSND I have not run into that error, but you could take a look at this blog post, not sure whether it helps: https://blog.csdn.net/Geek_of_CSDN/article/details/86527107<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,459 | closed | Update pipelines.py | Modified QA pipeline to consider all features for each example before generating topk answers.
Current pipeline only takes one SquadExample, one SquadFeature, one start logit list, one end logit list to retrieve the answer, this is not correct as one SquadExample can produce multiple SquadFeatures. | 01-08-2020 15:42:49 | 01-08-2020 15:42:49 | Hi @Perseus14, thanks for your contribution :).
I took the liberty to apply black formatting so that tests are happy.
Looks good to me 👍 <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2459?src=pr&el=h1) Report
> Merging [#2459](https://codecov.io/gh/huggingface/transformers/pull/2459?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/16ce15ed4bd0865d24a94aa839a44cf0f400ef50?src=pr&el=desc) will **increase** coverage by `0.14%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2459?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2459 +/- ##
==========================================
+ Coverage 73.24% 73.39% +0.14%
==========================================
Files 87 87
Lines 15001 15005 +4
==========================================
+ Hits 10988 11013 +25
+ Misses 4013 3992 -21
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2459?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2459/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `69.03% <100%> (+0.35%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2459/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `88% <0%> (+0.16%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2459/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.97% <0%> (+6.6%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2459?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2459?src=pr&el=footer). Last update [16ce15e...0d6c17f](https://codecov.io/gh/huggingface/transformers/pull/2459?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Ok great, thanks @Perseus14 @mfuntowicz! |
transformers | 2,458 | closed | Update QA pipeline | Modified QA pipeline to consider all features for each example before generating topk answers.
Current pipeline only takes one SquadExample, one SquadFeature, one start logit list, one end logit list to retrieve the answer, this is not correct as one SquadExample can produce multiple SquadFeatures. | 01-08-2020 15:34:22 | 01-08-2020 15:34:22 | |
transformers | 2,457 | closed | New SQuAD API for distillation script | The squad distillation script is still using methods from files that do not exist anymore (utils_squad and utils_squad_evaluate).
I updated the script to use the newer API. | 01-08-2020 15:16:13 | 01-08-2020 15:16:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2457?src=pr&el=h1) Report
> Merging [#2457](https://codecov.io/gh/huggingface/transformers/pull/2457?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/16ce15ed4bd0865d24a94aa839a44cf0f400ef50?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2457?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2457 +/- ##
=======================================
Coverage 73.24% 73.24%
=======================================
Files 87 87
Lines 15001 15001
=======================================
Hits 10988 10988
Misses 4013 4013
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2457?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2457?src=pr&el=footer). Last update [16ce15e...8eaea4e](https://codecov.io/gh/huggingface/transformers/pull/2457?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,456 | closed | Adding usage example with Tensorflow | Simple training and fine-tuning example of DistilBERT in a Colab. | 01-08-2020 14:50:05 | 01-08-2020 14:50:05 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,455 | closed | ROBERTa model wrong padding for token_type_ids field if return_tensors=True | ## 🐛 Bug
<!-- Important information -->
Model I am using ROBERTa
ROBERTa model wrong padding for token_type_ids field if return_tensors=True.
Language I am using the model on English:
The problem arise when using:
* [ ] the official example scripts: (give details)
* [* ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ *] my own task or dataset: (give details)
## To Reproduce
Please run following code
```
from transformers import pipeline, AutoModel, AutoTokenizer
import torch
model_name = 'roberta-base'
tokenizer = AutoTokenizer.from_pretrained(model_name)
corpus = ['this is a test', 'this is another test example', 'one']
model = AutoModel.from_pretrained(model_name).cuda()  # instantiate the model used below
toks = tokenizer.batch_encode_plus(corpus, add_special_tokens=True, max_length=128, return_tensors='pt')
print(toks)
encoded = model(**{k:v.cuda() for k, v in toks.items()}) #crash will be here
```
Steps to reproduce the behavior:
1. Run code.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```
RuntimeError Traceback (most recent call last)
<ipython-input-14-7ba1420d7b7f> in <module>
----> 1 encoded = model(**{k:v.cuda() for k, v in toks.items()})
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
733 head_mask = [None] * self.config.num_hidden_layers
734
--> 735 embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
736 encoder_outputs = self.encoder(embedding_output,
737 attention_mask=extended_attention_mask,
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
68 token_type_ids=token_type_ids,
69 position_ids=position_ids,
---> 70 inputs_embeds=inputs_embeds)
71
72
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
188 token_type_embeddings = self.token_type_embeddings(token_type_ids)
189
--> 190 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
191 embeddings = self.LayerNorm(embeddings)
192 embeddings = self.dropout(embeddings)
RuntimeError: CUDA error: device-side assert triggered
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The crash happens because, to convert the lists to tensors, the tokenizer pads with the value 1:
`padded_value = [v + [self.pad_token_id if key == 'input_ids' else 1] * (max_seq_len - len(v)) for v in padded_value]`
That is probably the wrong strategy for BERT-like models for the `token_type_ids` field, where 1 marks tokens of the second sentence.
The logic may also be wrong for attention_mask, since it should be 0 for non-meaningful tokens. You have to use return_attention_masks, which is not enabled by default and also crashes on my machine.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-ede112fd760a> in <module>
1 corpus = ['this is a test', 'this is another test example', 'one']
----> 2 toks = tokenizer.batch_encode_plus(corpus, add_special_tokens=True, max_length=128, return_attention_masks=True, return_tensors='pt')
3 toks
~/anaconda3/lib/python3.6/site-packages/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, return_tensors, return_input_lengths, return_attention_masks, **kwargs)
971 if return_attention_masks:
972 if is_tf_available():
--> 973 batch_outputs['attention_mask'] = tf.abs(batch_outputs['attention_mask'] - 1)
974 else:
975 batch_outputs['attention_mask'] = torch.abs(batch_outputs['attention_mask'] - 1)
~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/util/dispatch.py in wrapper(*args, **kwargs)
178 """Call target, and fall back on dispatchers if there is a TypeError."""
179 try:
--> 180 return target(*args, **kwargs)
181 except (TypeError, ValueError):
182 # Note: convert_to_eager_tensor currently raises a ValueError, not a
~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/ops/math_ops.py in abs(x, name)
273 """
274 with ops.name_scope(name, "Abs", [x]) as name:
--> 275 x = ops.convert_to_tensor(x, name="x")
276 if x.dtype.is_complex:
277 return gen_math_ops.complex_abs(x, Tout=x.dtype.real_dtype, name=name)
~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in convert_to_tensor(value, dtype, name, preferred_dtype, dtype_hint)
1182 preferred_dtype = deprecation.deprecated_argument_lookup(
1183 "dtype_hint", dtype_hint, "preferred_dtype", preferred_dtype)
-> 1184 return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
1185
1186
~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in convert_to_tensor_v2(value, dtype, dtype_hint, name)
1240 name=name,
1241 preferred_dtype=dtype_hint,
-> 1242 as_ref=False)
1243
1244
~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx, accept_composite_tensors)
1294
1295 if ret is None:
-> 1296 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1297
1298 if ret is NotImplemented:
~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
284 as_ref=False):
285 _ = as_ref
--> 286 return constant(v, dtype=dtype, name=name)
287
288
~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in constant(value, dtype, shape, name)
225 """
226 return _constant_impl(value, dtype, shape, name, verify_shape=False,
--> 227 allow_broadcast=True)
228
229
~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
233 ctx = context.context()
234 if ctx.executing_eagerly():
--> 235 t = convert_to_eager_tensor(value, ctx, dtype)
236 if shape is None:
237 return t
~/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
94 dtype = dtypes.as_dtype(dtype).as_datatype_enum
95 ctx.ensure_initialized()
---> 96 return ops.EagerTensor(value, ctx.device_name, dtype)
97
98
ValueError: Attempt to convert a value (tensor([[-1, -1, -1, -1, -1, -1, 0],
[-1, -1, -1, -1, -1, -1, -1],
[-1, -1, -1, 0, 0, 0, 0]])) with an unsupported type (<class 'torch.Tensor'>) to a Tensor.
```
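Until this is fixed, a rough workaround sketch (my own, assuming transformers 2.3.0 behaviour): skip the built-in tensor conversion and pad the three fields by hand, keeping `token_type_ids` and `attention_mask` at 0 on padding positions.

```python
toks = tokenizer.batch_encode_plus(corpus, add_special_tokens=True, max_length=128)
max_len = max(len(ids) for ids in toks['input_ids'])
batch = {
    'input_ids': [ids + [tokenizer.pad_token_id] * (max_len - len(ids)) for ids in toks['input_ids']],
    'token_type_ids': [tt + [0] * (max_len - len(tt)) for tt in toks['token_type_ids']],
    'attention_mask': [[1] * len(ids) + [0] * (max_len - len(ids)) for ids in toks['input_ids']],
}
batch = {k: torch.tensor(v).cuda() for k, v in batch.items()}
encoded = model(**batch)
```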
## Environment
* OS: Ubuntu 18.04
* Python version: Python 3.6.5 :: Anaconda, Inc.
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? Yes
* Distributed or parallel setup ? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
I've been able to fix the `return_attention_masks` error by manually defining
`tokenization_utils.is_tf_available = lambda: False`
It seems that tf2.0 can enable `_tf_available` in src/transformers/file_utils.py,
which triggers the problematic branch (second stack in the second trace)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,454 | closed | Add XLM-RoBERTa model for TF2 | Hello,
I have implemented the XLM-RoBERTa model handling for Tensorflow 2. | 01-08-2020 13:26:11 | 01-08-2020 13:26:11 | There is a little incompatibility between isort and black apparently https://github.com/psf/black/issues/251<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2454?src=pr&el=h1) Report
> Merging [#2454](https://codecov.io/gh/huggingface/transformers/pull/2454?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d87eafd118739a4c121d69d7cff425264f01e1c?src=pr&el=desc) will **increase** coverage by `0.6%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2454?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2454 +/- ##
=========================================
+ Coverage 74.51% 75.11% +0.6%
=========================================
Files 87 88 +1
Lines 14920 14945 +25
=========================================
+ Hits 11117 11226 +109
+ Misses 3803 3719 -84
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2454?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/2454/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.8% <100%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2454/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2454/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.72% <0%> (+27.54%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2454?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2454?src=pr&el=footer). Last update [9d87eaf...bb1aa06](https://codecov.io/gh/huggingface/transformers/pull/2454?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Does it work even if xlm-roberta-large is a pretrained PyTorch model? I mean, do we need to convert the PyTorch model to TensorFlow?<|||||>@jplu I took the liberty of updating the documentation to the new format directly on your fork. Thank you for your contribution, this is awesome!
transformers | 2,453 | closed | Installation of Transformers without Sacremoses | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi HuggingFace Team!
I was checking the dependencies of this library, and I found that sacremoses does not have an accepted licence type for my system. The setup.py file says that it's needed for XLM. If I don't plan on using XLM would I be able to modify the setup.py and remove the sacremoses requirement?
Thanks!
Zander
| 01-08-2020 13:23:45 | 01-08-2020 13:23:45 | I commented sacramoses it out in the setup.py and installed it, everything worked as designed! As long as I don't use XLM<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Having optional GPL code in a widely used package is an issue. If fixing it is as simple as commenting it out in the setup, couldn't there be a way to make that available through some variant, so as not to taint other open source packages?<|||||>`sacremoses` seems to have been licensed under MIT since https://github.com/alvations/sacremoses/pull/92 though? |
transformers | 2,452 | closed | Remove redundant hidden states | The quickstart showcasing the usage of the Model2Model currently fails. This is due to a positional argument that should be a named argument.
As I understand it, the `encoder_hidden_states` are already present in the `kwargs_decoder` dictionary, there is therefore no need to pass it to the decoder forward call.
With the current quickstart example this crashes as the position of the `encoder_hidden_states` means it's passed as an `attention_mask`.
Please correct me if I'm wrong @rlouf @thomwolf | 01-08-2020 12:56:06 | 01-08-2020 12:56:06 | |
transformers | 2,451 | closed | Add check for token_type_ids before tensorizing | Fix an issue where `prepare_for_model()` gives a `KeyError` when
`return_token_type_ids` is set to `False` and `return_tensors` is
enabled. | 01-08-2020 12:33:33 | 01-08-2020 12:33:33 | Great, that looks good to me! |
transformers | 2,450 | closed | Error when running run_generation.py | I tried to run this code:
python ./examples/run_generation.py --model_type=gpt2 --length=20 --model_name_or_path=gpt2
However I am getting the error below:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\GPT2\\.cache\\torch\\transformers\\tmpy2recb0u' -> 'C:\\Users\\GPT2\\.cache\\torch\\transformers\\f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./examples/run_generation.py", line 237, in <module>
main()
File "./examples/run_generation.py", line 200, in main
tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path)
File "C:\gpt2\venv\lib\site-packages\transformers\tokenization_utils.py", line 309, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\gpt2\venv\lib\site-packages\transformers\tokenization_utils.py", line 415, in _from_pretrained
raise EnvironmentError(msg)
OSError: Couldn't reach server at '{}' to download vocabulary files.
How do I get over this hump? Thanks
| 01-08-2020 10:33:11 | 01-08-2020 10:33:11 | It seems to me that there is either another program that has a lock on the GPT-2 file or that you can't access our S3. Does the error still happen if you restart your machine?<|||||>yes. I restarted several times but the issue persist<|||||>Seems to be a file lock issue. Can't rename a file because it's being used.
See here:
https://github.com/huggingface/transformers/blob/f599623a99b808e3d5926d89cd13237457b9eeba/src/transformers/file_utils.py#L392
Related #2385<|||||>Ok this should be solved on master now that #2384 is merged |
transformers | 2,449 | closed | Evaluation not working on distilbert-base-uncased-distilled-squad | ## 🐛 Bug
<!-- Important information -->
Model I am using DistilBert: distilbert-base-uncased-distilled-squad
Language I am using the model on English:
The problem arise when using:
* [x] the official example scripts: run_squad.py in examples
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQuAD 1.1 and SQuAD2.0 dev dataset
## To Reproduce
Steps to reproduce the behavior:
1. python run_squad.py --model_type distilbert --model_name_or_path distilbert-base-uncased-distilled-squad --do_eval --do_lower_case --predict_file $SQUAD_DIR/dev-v2.0.json --max_seq_length 384 --doc_stride 128 --output_dir ./distill_squad/ --per_gpu_eval_batch_size=4 --version_2_with_negative
2. python run_squad.py --model_type distilbert --model_name_or_path distilbert-base-uncased-distilled-squad --do_eval --do_lower_case --predict_file $SQUAD_DIR/dev-v1.1.json --max_seq_length 384 --doc_stride 128 --output_dir ./distill_squad/ --per_gpu_eval_batch_size=4
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
Results on evaluation data
```python
{
"exact": 80.4177545691906,
"f1": 84.07154997729623,
"total": 11873,
"HasAns_exact": 76.73751686909581,
"HasAns_f1": 84.05558584352873,
"HasAns_total": 5928,
"NoAns_exact": 84.0874684608915,
"NoAns_f1": 84.0874684608915,
"NoAns_total": 5945
}
```
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: CentOS Linux
* Python version: 3.6.9
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU: yes
* Distributed or parallel setup: None
* Any other relevant information:
## Additional context
```code
01/08/2020 08:41:52 - INFO - __main__ - ***** Running evaluation *****
01/08/2020 08:41:52 - INFO - __main__ - Num examples = 10833
01/08/2020 08:41:52 - INFO - __main__ - Batch size = 4
Evaluating: 0%| | 0/2709 [00:00<?, ?it/s]
Traceback (most recent call last):
File "run_squad.py", line 815, in <module>
main()
File "run_squad.py", line 804, in main
result = evaluate(args, model, tokenizer, prefix=global_step)
File "run_squad.py", line 323, in evaluate
outputs = model(**inputs)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
```
<!-- Add any other context about the problem here. -->
I just changed the model to bert and model_name to bert-base-uncased. It is working fine. I think there is some problem with distilbert model. Can you please help me on this? | 01-08-2020 08:52:12 | 01-08-2020 08:52:12 | Hello, thanks for raising this issue! This should have been fixed with 16ce15e, can you let me know if it fixes your issue?<|||||>Hi @LysandreJik, thanks for fixing the issue on such short notice. Yes, now it's working. |
transformers | 2,448 | closed | Tokenizer methods and padding | ## ❓ Questions & Help
I wanted to know whether there was a perticular reason why the `get_special_tokens_mask` method of the tokenizer does not also return as mask over padding tokens, only <CLS> and <SEP> tokens, in the case where `already_has_special_tokens=True` ? I had to rewrite a custom function for my usecase, but it seemed off.
Also, I think there should be an additional `padding` kwarg in the method, which if provided would return a longer mask then the sum of lenfgths of `token_ids_0` and `token_ids_1`, in the case where `already_has_special_tokens=False`. The same should be true for `build_inputs_with_special_tokens` IMO. What do yoy think ?
| 01-08-2020 07:59:20 | 01-08-2020 07:59:20 | I have no strong opinion about this. Wdyt @LysandreJik?
Related to this though, this is how I'm proposing to mask the padding tokens in Masked language modeling batches in the `run_lm_finetuning` script: https://github.com/huggingface/transformers/pull/2570/commits/55939b5707066f612b0b2390787b325d30af728c#diff-713f433a085810c3d63a417486e56a88R205-R206<|||||>Since you are already caching the encoded examples, I think you can do: `batch_encode_plus.(..., pad_to_max_length=True)` in both Dataset's `__init__`, instead of repeating this for each epoch. This will also get rid of the introduced `collate_fn` logic you introduce.
Regarding the issue, I just think it's surprising `get_special_tokens_mask` does not consider padding tokens as special tokens, requiring them to be handled separately, for instance as you did.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,447 | closed | Reproducibility problem with DistilBERT paper | ## ❓ Questions & Help
We are currently working a follow-up to your work “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter”. We’ve noticed some skeptical data in Table 1 of your paper. On MRPC, the reported averaged F1 and acc result is 90.2, which is even ~2% higher than BERT-base (teacher). We carefully reproduce your experiment with your code and pretrained checkpoint provided in huggingface/transformers on Github. Our reproduced result is 89.6/85.5, which means the averaged F1 and acc should be 87.55, which is very different from your reported result. With all due respect, we personally think you may have mistakenly report the F1 score instead of averaged F1 & acc. Another evidence is your previous blog (https://user-images.githubusercontent.com/16107619/64210993-c0ef1b80-ce72-11e9-806b-171313e8ae9e.png) and DistilRoBERTa, which has a much lower MRPC score of 86.6 (https://github.com/huggingface/transformers/tree/master/examples/distillation). We list your reported results and our reproduced results and reproduced results on GLUE dev set:
DistillBERT on GLUE Dev Set | CoLA | MNLI-m | MNLI-mm | MRPC | QNLI | QQP | RTE | SST-2 | STS-B
-- | -- | -- | -- | -- | -- | -- | -- | -- | --
DistilBERT Blog | 42.5 | 81.6 | 81.1 | 85.35(88.3/82.4) | 85.5 | 89.15(87.7/90.6) | 60.0 | 92.7 | 84.75(84.5/85.0)
DistilBERT paper | 49.1 | 81.8 | | 90.2 | 90.2 | 89.2 | 62.9 | 92.7 | 90.7
Our reproduced | 43.0 | - | - | 87.55(89.6/85.5) | 85.8 | - | - | - | 80.53(80.6/80.5)
According to our experiment, the result is actually very close to the previous results you reported on your blog. We are not able to reproduce results reported in your paper though we have tried some hyperparameter tuning. We will really appreciate it if you can confirm the result in your paper or send us the hyperparameters to reproduce the results.
| 01-08-2020 07:38:07 | 01-08-2020 07:38:07 | Does anyone have the same problem here? |
transformers | 2,446 | closed | RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. | ## ❓ Questions & Help
I am receiving the error RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows.
What can I do to increase this source sentence length constraint?
<img width="1316" alt="Screen Shot 2020-01-08 at 2 01 49 pm" src="https://user-images.githubusercontent.com/44693666/71954228-8ed63780-321f-11ea-84ef-eac4519235c4.png">
| 01-08-2020 06:03:41 | 01-08-2020 06:03:41 | Different models have different sequence lengths. Some models don't, like XLNet and TransformerXL.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I have the same error.
I found out that it is because the BERT model only handles up to 512 characters, so if your texts are longer, I cannot make embeddings. There are different ways to handle this, and one is e.g. to make a sliding window of the embeddings, and then take the average embedding for words in overlapping windows.
<|||||>Quick reminder: Limit of 512 is not word limit, it is token length limit as BERT models do not use words as tokens. You always have more tokens than number of words.
You can divide the text into half and then pool afterwards even though this is not exactly the same as having the whole thing and then pooling.<|||||>Related to this: Using tokenizer.encode_plus(doc) gives a sensible warning:
`Token indices sequence length is longer than the specified maximum sequence length for this model (548 > 512). Running this sequence through the model will result in indexing errors`
But tokenizer.batch_encode_plus doesn't seem to output this warning. Are other people noticing this?<|||||>Hi All,
I am running a Roberta Model for predicting the sentence classification task. I am using Fastai implementation of it. I get a similar error as mentioned above. Please help me resolve this.

<|||||>Check if you texts are longer than 512 characters, and if so the error is
expected.
Solutions:
1. Only use the first 512 characters of each text.
2. Divide you texts into chunks of 512 characters and make embeddings on
each chunk
On Wed, 6 May 2020, 19:07 Shravan Koninti, <[email protected]> wrote:
> Hi All,
>
> I am running a Roberta Model for predicting the sentence classification
> task. I am using Fastai implementation of it. I get a similar error as
> mentioned above. Please help me resolve this.
>
> [image: fast_er_1]
> <https://user-images.githubusercontent.com/6191291/81206700-1d01d500-8fea-11ea-8964-86298ad231cd.JPG>
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/2446#issuecomment-624772966>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/ADCAJ7S77GPU3D7XZ57Z2X3RQGKNPANCNFSM4KEDJF3Q>
> .
>
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,445 | closed | Error occurs in XLMRobertaModel when token_type_ids is given. | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): XLMRoberta
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* the official example scripts: (give details)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
```
>>> import torch
>>> from transformers import XLMRobertaModel
>>> model = XLMRobertaModel.from_pretrained('xlm-roberta-base', cache_dir="cache_dir")
>>> input_ids = torch.tensor([[0, 164, 100231, 135758, 32, 2, 2, 157, 217, 164, 10869, 5, 2]])
>>> outputs = model(input_ids)
>>> outputs[0].size()
torch.Size([1, 13, 768])
>>>
>>> token_type_ids = torch.tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]])
>>> outputs = model(input_ids, token_type_ids=token_type_ids)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/bering/anaconda3/envs/torch1.3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/bering/anaconda3/envs/torch1.3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 735, in forward
embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
File "/home/bering/anaconda3/envs/torch1.3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/bering/anaconda3/envs/torch1.3/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 70, in forward
inputs_embeds=inputs_embeds)
File "/home/bering/anaconda3/envs/torch1.3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 188, in forward
token_type_embeddings = self.token_type_embeddings(token_type_ids)
File "/home/bering/anaconda3/envs/torch1.3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/bering/anaconda3/envs/torch1.3/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/bering/anaconda3/envs/torch1.3/lib/python3.6/site-packages/torch/nn/functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range: Tried to access index 1 out of table with 0 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Ubuntu 18.04
* Python version: 3.6
* PyTorch version: 1.3
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ?: no
* Distributed or parallel setup ?: no
* Any other relevant information:
| 01-08-2020 05:59:18 | 01-08-2020 05:59:18 | XLMRobertaModel does not support token types > 0.
If you look at the embedding table you will see that it has only a single entry. Basically, the model does not rely on this embedding to understand where a sentence ends. I think they included it only for API compatibility.<|||||>@andompesta Thank you very much! :) |
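For anyone hitting the same crash, a minimal sketch of the implication above: either omit `token_type_ids` entirely or pass all zeros, since XLM-R's token type embedding table has a single entry.

```python
import torch
from transformers import XLMRobertaModel

model = XLMRobertaModel.from_pretrained('xlm-roberta-base')
input_ids = torch.tensor([[0, 164, 100231, 135758, 32, 2, 2, 157, 217, 164, 10869, 5, 2]])

token_type_ids = torch.zeros_like(input_ids)  # only 0 is a valid token type for XLM-R
outputs = model(input_ids, token_type_ids=token_type_ids)
print(outputs[0].size())
```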
transformers | 2,444 | closed | Update | I have merged the updates from the upstream repository. | 01-08-2020 05:31:03 | 01-08-2020 05:31:03 | I'm sorry, I made a mistake.
I just wanted to open the pull request against our own branch, which is forked from yours.
So I am closing this pull request. |
transformers | 2,443 | closed | porting XLM-Roberta to tensorflow 2.0 | ## ❓ Questions & Help
Yesterday I ported XLM-Roberta from PyTorch to TensorFlow, mainly following the instructions provided in [huggingface/from-tensorflow-to-pytorch](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28).
The final error is computed using the DUMMY_INPUT as input values for the large MaskedLM model and is evaluated on the final prediction_score output.
I compute the error as:
```python
max_absolute_diff = np.amax(np.abs(tf_model_out.numpy() - pt_model_out.detach().numpy()))
```
and the final output is 0.00027179718, which is lower than the suggested 1e-3 bound.
According to your indication the error seems to be acceptable, given that XLM-Roberta is a huge model. However, I have experienced some huge output differences when I do not specify the position_ids. That is, the position_ids computed by the TFRobertaEmbeddings seem to be correct, since they correctly take into consideration the presence of pad tokens using the ``create_position_ids_from_input_ids`` function, whereas the PyTorch RobertaEmbeddings does not.
Moreover, I'm also wondering if it is possible to merge the interface of the TF models with the PyTorch models. Not sure if it is worth it, but by using the __call__ and call functions provided by TF 2.0 it is possible to obtain an equivalent interface between the two frameworks.
For example:
```python
class TFXLMRobertaForMaskedLM(TFXLMRobertaPreTrainedModel):
def __call__(self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, **kwargs):
inputs = (input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds)
return super(TFXLMRobertaForMaskedLM, self).__call__(inputs, **kwargs)
def call(self, inputs, **kwargs):
outputs = self.xlm_roberta(*inputs, **kwargs)
sequence_output = outputs[0]
prediction_scores = self.lm_head(sequence_output)
outputs = (prediction_scores,) + outputs[2:]
return outputs # prediction_scores, (hidden_states), (attentions)
```
should be equivalent to the PyTorch implementation | 01-08-2020 03:46:51 | 01-08-2020 03:46:51 | Hi @andompesta hard to say without looking at the code – did you check out this related PR by @jplu : #2443 |
transformers | 2,442 | closed | loss_fct = CrossEntropyLoss(ignore_index=-1) for BERT/RoBERTa MaksedLM | ## 🐛 Bug
<!-- Important information -->
The models I am using (Bert, RoBERTa....):
Language I am using the model on (English):
The problem arise when I tried to fine-tune the model using `MaskedLM` given the `masked_lm_labels`:
It seems that the model forward loop specifies that `loss_fct = CrossEntropyLoss(ignore_index=-1)` where the instructions previously stated masked ids are -100. This gives a "device-side assert triggered " error for GPU training and "Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:97" for CPU training.
* [modeling_bert.py / modeling_roberta.py ] the official example scripts: for `RobertaForMaskedLM` / `BertForMaskedLM` we have `loss_fct = CrossEntropyLoss(ignore_index=-1)`
* [ ] my own modified scripts: set ignore_index = -100 or simply remove it
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. run the "run_lm_finetuning.py" file in the examples
2. It seems that if we use `pip install transformers` and get transformers 2.3.0 We would have this error. If installing from source code, the error is gone
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Linux 18.04.3
* Python version: 3.6
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? yes
* Distributed or parallel setup ? nope
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| 01-08-2020 02:55:12 | 01-08-2020 02:55:12 | Hello! This is due to the pull request #2130. I believe you're running the examples with transformers 2.3.0 whereas they're maintained to work with the current master branch. Please install the library from master:
```pip install git+https://github.com/huggingface/transformers```
in order to get the examples working with the source code.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
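For completeness, a tiny sketch of my own (not from the thread) of why the mismatch crashes: the `ignore_index` of the loss has to match the value used to mask the labels.

```python
import torch
from torch.nn import CrossEntropyLoss

logits = torch.randn(4, 30522)             # (tokens, vocab)
labels = torch.tensor([-100, 5, -100, 7])  # -100 marks positions excluded from the loss

loss_ok = CrossEntropyLoss(ignore_index=-100)(logits, labels)  # -100 entries are skipped
loss_bad = CrossEntropyLoss(ignore_index=-1)                   # calling this one would treat -100 as a class index and fail
```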
|
transformers | 2,441 | closed | is pytorch-pretrained-bert still being maintained in the future? | ## 📚 Migration
<!-- Important information -->
Model I am using (Bert, XLNet....):
Language I am using the model on (English, Chinese....):
The problem arise when using:
* [ ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
Details of the issue:
<!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. -->
## Environment
* OS:
* Python version:
* PyTorch version:
* PyTorch Transformers version (or branch):
* Using GPU ?
* Distributed or parallel setup ?
* Any other relevant information:
## Checklist
- [ ] I have read the migration guide in the readme.
- [ ] I checked if a related official extension example runs on my machine.
## Additional context
<!-- Add any other context about the problem here. -->
| 01-07-2020 23:57:14 | 01-07-2020 23:57:14 | Hi, `pytorch-pretrained-BERT` is the name of this library as it was a year ago. It has since evolved into `pytorch-transformers` and now `transformers`. It is the same library.<|||||>Hi thank you for your reply. But my question really is that I'm now using apis from pytorch-pretrained-BERT directly and will this library be maintained under new release (new python release, bug fixed, etc)?
The reason is that I found some discrepancies between apis from pytorch-pertrained-BERT library and transformers library and the old one (from pytorch-pretrained-BERT) gave better results so I'm sticking with that library.<|||||>No updates will be done to the `pytorch-pretrained-BERT`, no bug fixes either. It is deprecated. It will remain on pip however.
Would you mind sharing where the `pytorch-pretrained-BERT` package gave better results so that we may investigate this? Thank you.<|||||>yes i'm encountering performance drop with sequence classification tasks, same as issues described in this thread: https://github.com/huggingface/transformers/issues/938.<|||||>@LysandreJik There seem to be quite a few posts that highlight this difference in performance. It is quite alarming but I'm not sure if it is worth investigating because it might be impossible or improbable to solve.
https://github.com/huggingface/transformers/issues/938
https://github.com/huggingface/transformers/issues/931
https://github.com/UKPLab/sentence-transformers/issues/50
https://github.com/huggingface/transformers/issues/2441<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,440 | closed | DistilBertForSequenceClassification returning nans | ## DistilBertForSequenceClassification returning NaNs
<!-- A clear and concise description of the question. -->
DistilBertForSequenceClassification using the distilbert-base-uncased is returning Nans for both the logits and loss. Has anyone encountered this issue? | 01-07-2020 23:30:09 | 01-07-2020 23:30:09 | I'm facing this issue. How did you resolve this?<|||||>Me too, and how did you resolve this problem? |
transformers | 2,439 | closed | Generating text with fine-tuned TFGPT2LMHeadModel in python. | I've finetuned GPT2 using the following script:
```
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel
import tensorflow as tf
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
file_path = 'text.txt'
with open(file_path, encoding="utf-8") as f:
text = f.read()
tokenized_text = tokenizer.encode(text)
examples = []
block_size = 100
for i in range(0, len(tokenized_text) - block_size + 1, block_size): # Truncate in block of block_size
examples.append(tokenized_text[i:i + block_size])
inputs, labels = [], []
for ex in examples:
inputs.append(ex[:-1])
labels.append(ex[1:])
dataset = tf.data.Dataset.from_tensor_slices((inputs, labels))
BATCH_SIZE = 8
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=[loss, *[None] * model.config.n_layer], metrics=[metric])
model.fit(dataset, epochs=20)
```
This runs fine, and after 20 epochs I have an accuracy of ~0.59.
The problem comes when I tried to write my own text generation script:
```
def generate_text(model, tokenizer, start_string, num_generate):
input_eval = tf.expand_dims(tokenizer.encode(start_string), 0)
token_ids = []
for i in range(num_generate):
predictions = tf.squeeze(model.predict(input_eval)[0], 0)
predicted_id = tf.random.categorical(predictions, 1)[-1, 0].numpy().item()
input_eval = tf.expand_dims([predicted_id], 0)
token_ids.append(predicted_id)
return start_string + tokenizer.decode(token_ids)
```
I get output, but the output is of a sufficiently lower quality than when I train a model using "run_lm_finetuning.py" and generate text using "run_generation.py"
I looked into the example generation script, and it looks like there is simply a call to "model.generate(...)"
Where does this model.generate() method exist? | 01-07-2020 16:35:18 | 01-07-2020 16:35:18 | Did you resolve this? I think on the current commit `generate()` still doesn't exist.<|||||>There is no TensorFlow implementation for the `generate()` method yet. We're working on it, but in the meantime, you could do your own generation loop or use a PyTorch model with the `generate()` method.<|||||>I need to change to `BATCH_SIZE = 12` in the above or else this example code will not run. There would be a dimension mismatch with `BATCH_SIZE = 8` |
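Since `generate()` was not available for TF models at the time, here is a minimal sampling-loop sketch (my own, with no temperature or top-k filtering) that keeps the full context at every step, which is what the generation function above was missing:

```python
import tensorflow as tf

def generate_text(model, tokenizer, start_string, num_generate):
    generated = tokenizer.encode(start_string)
    for _ in range(num_generate):
        logits = model(tf.constant([generated]))[0]   # (1, seq_len, vocab)
        next_logits = logits[:, -1, :]                # condition on the whole context
        next_id = int(tf.random.categorical(next_logits, num_samples=1)[0, 0].numpy())
        generated.append(next_id)
    return tokenizer.decode(generated)
```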
transformers | 2,438 | closed | Fix typographical errors | Fixed a few typos. | 01-07-2020 14:48:37 | 01-07-2020 14:48:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2438?src=pr&el=h1) Report
> Merging [#2438](https://codecov.io/gh/huggingface/transformers/pull/2438?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fb2ab869c6894ea05df97a1372ac9e016ec9c662?src=pr&el=desc) will **decrease** coverage by `0.17%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2438?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2438 +/- ##
==========================================
- Coverage 73.24% 73.06% -0.18%
==========================================
Files 87 87
Lines 15001 15001
==========================================
- Hits 10988 10961 -27
- Misses 4013 4040 +27
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2438?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.26% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `25% <0%> (-7.15%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `66.37% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.53% <0%> (-1.59%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.19% <0%> (-0.65%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2438?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2438?src=pr&el=footer). Last update [fb2ab86...58ca488](https://codecov.io/gh/huggingface/transformers/pull/2438?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks @gentaiscool ! |
transformers | 2,437 | closed | Add CamemBERT model for TF2 | Hello,
Here is another contribution :) I have implemented CamemBERT model handling for TensorFlow 2.
I now have the model on my disk; should I send it to you? Or will you generate it from your side? Or should I upload it to my account? As you wish :)
Best.
Julien. | 01-07-2020 14:41:00 | 01-07-2020 14:41:00 | Hmm, I don't understand why this test is failing; I haven't touched DistilBERT...
```
=================================== FAILURES ===================================
______________ TFDistilBertModelTest.test_pt_tf_model_equivalence ______________
[gw2] linux -- Python 3.5.9 /usr/local/bin/python
self = <tests.test_modeling_tf_distilbert.TFDistilBertModelTest testMethod=test_pt_tf_model_equivalence>
def test_pt_tf_model_equivalence(self):
if not is_torch_available():
return
import torch
import transformers
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
for model_class in self.all_model_classes:
pt_model_class_name = model_class.__name__[2:] # Skip the "TF" at the beggining
pt_model_class = getattr(transformers, pt_model_class_name)
config.output_hidden_states = True
tf_model = model_class(config)
pt_model = pt_model_class(config)
# Check we can load pt model in tf and vice-versa with model => model functions
tf_model = transformers.load_pytorch_model_in_tf2_model(tf_model, pt_model, tf_inputs=inputs_dict)
pt_model = transformers.load_tf2_model_in_pytorch_model(pt_model, tf_model)
# Check predictions on first output (logits/hidden-states) are close enought given low-level computational differences
pt_model.eval()
pt_inputs_dict = dict(
(name, torch.from_numpy(key.numpy()).to(torch.long)) for name, key in inputs_dict.items()
)
with torch.no_grad():
pto = pt_model(**pt_inputs_dict)
tfo = tf_model(inputs_dict, training=False)
tf_hidden_states = tfo[0].numpy()
pt_hidden_states = pto[0].numpy()
tf_hidden_states[np.isnan(tf_hidden_states)] = 0
pt_hidden_states[np.isnan(pt_hidden_states)] = 0
max_diff = np.amax(np.abs(tf_hidden_states - pt_hidden_states))
# Debug info (remove when fixed)
if max_diff >= 2e-2:
print("===")
print(model_class)
print(config)
print(inputs_dict)
print(pt_inputs_dict)
> self.assertLessEqual(max_diff, 2e-2)
E AssertionError: 2.3126152 not less than or equal to 0.02
tests/test_modeling_tf_common.py:125: AssertionError
```
<|||||>@jplu It's an unrelated Heisenbug.
@thomwolf For some reason the debug prints were not printed :(<|||||>Ok I thought it was coming from me ahah
@thomwolf I'll let you check; do not hesitate to ping me if I need to do something from my side.<|||||>@jplu Here too, I took the liberty of updating the documentation directly on your fork. Thank you very much for your contributions, this is great!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2437?src=pr&el=h1) Report
> Merging [#2437](https://codecov.io/gh/huggingface/transformers/pull/2437?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b5625f131ddc55ec1620270aac3e38ea170e5708?src=pr&el=desc) will **increase** coverage by `0.25%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2437?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2437 +/- ##
==========================================
+ Coverage 74.34% 74.59% +0.25%
==========================================
Files 88 89 +1
Lines 14945 14971 +26
==========================================
+ Hits 11111 11168 +57
+ Misses 3834 3803 -31
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2437?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/2437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.83% <100%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <100%> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.82% <0%> (+0.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.69% <0%> (+0.81%)` | :arrow_up: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.46% <0%> (+2.27%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `69.6% <0%> (+16.66%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2437?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2437?src=pr&el=footer). Last update [b5625f1...b955f53](https://codecov.io/gh/huggingface/transformers/pull/2437?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,436 | closed | Added repetition penalty to PPLM example | It was giving awful results, so I added repetition penalty which improved things. | 01-07-2020 14:11:31 | 01-07-2020 14:11:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2436?src=pr&el=h1) Report
> Merging [#2436](https://codecov.io/gh/huggingface/transformers/pull/2436?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/74755c89b92e0c0c027221c13fd034afed4d2136?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2436?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2436 +/- ##
=======================================
Coverage 73.24% 73.24%
=======================================
Files 87 87
Lines 14989 14989
=======================================
Hits 10979 10979
Misses 4010 4010
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2436?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2436?src=pr&el=footer). Last update [74755c8...fcfb816](https://codecov.io/gh/huggingface/transformers/pull/2436?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>what do you think @w4nderlust @mimosavvy?<|||||>[IWillPull here, writing from a personal acc]
Do not merge yet.
I think it's best to explain in the help text that this was not in the original paper and change the default value to 1.0 so it doesn't influence anything by default.<|||||>Thank you for your time reviewing this.
May I ask, why does the code quality fail? What did I miss?<|||||>> if before you were getting awful results it's likely because of sub-optimal parameter choices, as we obtained good results without the need for the repetition penalty.
Could you share your optimal parameters?<|||||>> May I ask, why does the code quality fail? What did I miss?
Can you run `make style` as indicated in the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md)?<|||||>> > if before you were getting awful results it's likely because of sub-optimal parameter choices, as we obtained good results without the need for the repetition penalty.
>
> Could you share your optimal parameters?
The ones we reported in the paper work in most cases, but for some BOWs others may be better because of the size of the BOW and the specific words contained in it (if they are really common or less common); in general, though, the reported ones are pretty consistent.
For the discriminators,it's a bit trickier as each of them is a bit its own thing, so I would suggest to start from the reported parameters for the discriminator and play a bit around using the suggestions of what kind of impact you could expect from each parameter that we reported in the paper, until you are happy.<|||||>> > May I ask, why does the code quality fail? What did I miss?
>
> Can you run `make style` as indicated in the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md)?
@julien-c Thank you. I missed reading the guidelines before doing this PR, should I do a new one with proper branching?<|||||>LGTM, thanks!<|||||>@julien-c it didn't look entirely good to me. I explained my argument, that goes beyond repetition penalty for PPLM and is a general argument about repetition penalty (so applies to CTRL too) here: https://github.com/huggingface/transformers/pull/2303#issuecomment-572273727<|||||>Aarg I misunderstood your comment then @w4nderlust, I'll ask for more explicit greenlight next time!
@IWillPull can you please open a new PR to fix/improve remaining points? Thanks!<|||||>No problem @julien-c ! The repetition penalty as it is implemented in this PR is fine in the sense that it works exactly like the CTRL one and that worked for people so far.
What I think is that we should have a wider conversation including you, me, Thomas, Patrick and ideally also Nitish and Bryan from Salesforce about the best way to implement it for negative values (my suggestion is in the comment I linked, but it would be cool to have consensus about it).
I will send Nitish and Bryan an email, let's see what they think about it.<|||||>@julien-c Sure!
I will just wait for your (@w4nderlust and others) consensus as to not to make a mess of this.<|||||>@IWillPull
> > if before you were getting awful results it's likely because of sub-optimal parameter choices, as we obtained good results without the need for the repetition penalty.
>
> Could you share your optimal parameters?
The GPT-2 LM itself, and the discriminators are different from what is reported in the paper. I think you need ~1.5 times the step-size/iterations for this version of GPT-2 LM/attribute models and other parameters should work as is.
If you are using the GPT-2 LM from the paper (which corresponds to a previous version of the Huggingface GPT-2 LM) and the discriminators from the paper, the listed parameters in the Appendix work quite well. Code/models for what's in the paper --> https://github.com/uber-research/PPLM/tree/master/paper_code
Also, if repetition is a huge problem, Table S19 from the paper might be relevant. I think this would be an easy fix to help with the "awful" repetitions. Also, repetitions don't seem to be an issue if you're using the discriminator -- so I think a large part of the problem lies with the simple "BoW" loss as opposed to the decoding scheme.
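For readers following this thread, the CTRL-style penalty being discussed is usually applied by rescaling the scores of already-generated tokens just before sampling. A rough sketch (this is not the exact PPLM or CTRL code; names are illustrative):
```python
import torch

def apply_repetition_penalty(next_token_logits, generated_ids, penalty=1.2):
    # next_token_logits: 1-D tensor of scores over the vocabulary
    # generated_ids: token ids produced so far
    for token_id in set(generated_ids):
        score = next_token_logits[token_id]
        # CTRL-style rescaling: dividing a positive score or multiplying a
        # negative one both make the token less likely; this asymmetry for
        # negative values is exactly the point debated above.
        if score > 0:
            next_token_logits[token_id] = score / penalty
        else:
            next_token_logits[token_id] = score * penalty
    return next_token_logits
```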
transformers | 2,435 | closed | update the config.is_decoder=True before initialize the decoder | Currently the PreTrainedEncoderDecoder class fails to initialize the "cross-attention layer" since it updates decoder.config.is_decoder = True after decoder initialization. | 01-07-2020 14:02:30 | 01-07-2020 14:02:30 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2435?src=pr&el=h1) Report
> Merging [#2435](https://codecov.io/gh/huggingface/transformers/pull/2435?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9261c7f771fccfa2a2cb78ae544adef2f6eb402b?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `25%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2435?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2435 +/- ##
==========================================
- Coverage 73.24% 73.24% -0.01%
==========================================
Files 87 87
Lines 15001 15004 +3
==========================================
+ Hits 10988 10989 +1
- Misses 4013 4015 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2435?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `25.58% <25%> (+0.28%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2435?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2435?src=pr&el=footer). Last update [9261c7f...b4418d3](https://codecov.io/gh/huggingface/transformers/pull/2435?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Indeed, the cross attention is initialized in `BertLayer` and needs knowledge of the `is_decoder` boolean to ensure it is correctly initialized.
Looks good to me, thanks @zlinao <|||||>> Indeed, the cross attention is initialized in `BertLayer` and needs knowledge of the `is_decoder` boolean to ensure it is correctly initialized.
>
> Looks good to me, thanks @zlinao
Yes, exactly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
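To make the point in the discussion above concrete, here is a small illustrative sketch (not the exact PR diff) of the ordering the fix enforces:
```python
from transformers import BertConfig, BertModel

decoder_config = BertConfig.from_pretrained("bert-base-uncased")
decoder_config.is_decoder = True  # must be set before the model is built
decoder = BertModel.from_pretrained("bert-base-uncased", config=decoder_config)
# BertLayer only creates its cross-attention weights when it sees
# is_decoder=True at construction time, hence the ordering above.
```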
|
transformers | 2,434 | closed | spelling correction | 01-07-2020 13:59:04 | 01-07-2020 13:59:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2434?src=pr&el=h1) Report
> Merging [#2434](https://codecov.io/gh/huggingface/transformers/pull/2434?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/176d3b30798fce556613da31c698d31cfdfd02aa?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2434?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2434 +/- ##
=======================================
Coverage 73.24% 73.24%
=======================================
Files 87 87
Lines 15001 15001
=======================================
Hits 10988 10988
Misses 4013 4013
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2434?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2434?src=pr&el=footer). Last update [176d3b3...7bce837](https://codecov.io/gh/huggingface/transformers/pull/2434?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks @orena1 ! |
|
transformers | 2,433 | closed | make test problem | ## ❓ Questions & Help
Hello all,
I recently installed this library/module and wanted to run a test with: `make test`
However, things did not go right and I got the following message:
> python -m pytest -n auto --dist=loadfile -s -v ./tests/
> /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python: No module named pytest
> make: *** [test] Error 1
I'm using macOS Catalina with Python 3.7 (3.7.5); pytest is installed.
(I have no clue why it returns an error with Python 2.7.)
Thanks in advance
| 01-07-2020 13:38:07 | 01-07-2020 13:38:07 | |
transformers | 2,432 | closed | Fix misleading RoBERTa token type ids | RoBERTa does not actually make use of token type ids. When feeding the output of `encode_plus` used with a pair of sequences to the model directly, it crashes as it cannot handle token type ids that have a value of 1.
This fix returns a list of zeros as the token type ids instead. | 01-07-2020 12:47:48 | 01-07-2020 12:47:48 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2432?src=pr&el=h1) Report
> Merging [#2432](https://codecov.io/gh/huggingface/transformers/pull/2432?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9261c7f771fccfa2a2cb78ae544adef2f6eb402b?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2432?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2432 +/- ##
=======================================
Coverage 73.24% 73.24%
=======================================
Files 87 87
Lines 15001 15001
=======================================
Hits 10988 10988
Misses 4013 4013
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2432?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2432?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2432?src=pr&el=footer). Last update [9261c7f...7e3feb9](https://codecov.io/gh/huggingface/transformers/pull/2432?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>>
>
> RoBERTa does not actually make use of token type ids. When feeding the output of `encode_plus` used with a pair of sequences to the model directly, it crashes as it cannot handle token type ids that have a value of 1.
>
> This fix returns a list of zeros as the token type ids instead.
I encountered the same problem. Thank you for your solution; I've figured out what was wrong in my case now.
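A small sketch of what the fix means in practice (untested; assumes the `roberta-base` checkpoint):
```python
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")

enc = tokenizer.encode_plus("first sequence", "second sequence")
# With this change token_type_ids is all zeros, so the pair can be fed directly:
outputs = model(
    torch.tensor([enc["input_ids"]]),
    token_type_ids=torch.tensor([enc["token_type_ids"]]),
)
```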
transformers | 2,431 | closed | How can I fine-tune XLM for sentence classification? | ## ❓ Questions & Help
I am using the `XLMTokenizer` and `XLMForSequenceClassification` for fine-tuning the `xlm-mlm-en-2048` model to work on a sentence classification problem.
I am using the same configuration as the one that I have used for fine-tuning BERT.
Surprisingly, XLM seems not to be improving at all (The loss is decreasing but the accuracy isn't affected!).
Actually, the model has overfitted to always select the dominant class in the dataset!
```
EPOCH 0:
Iteration: 0. Loss: 1.0213737487792969. Accuracy: 66.17283950617283%
Iteration: 1000. Loss: 0.9081503748893738. Accuracy: 66.29629629629629%
EPOCH 1:
Iteration: 0. Loss: 0.6950288414955139. Accuracy: 66.29629629629629%
Iteration: 1000. Loss: 0.648954451084137. Accuracy: 66.29629629629629%
EPOCH 2:
Iteration: 0. Loss: 0.7168332934379578. Accuracy: 66.29629629629629%
Iteration: 1000. Loss: 0.38551628589630127. Accuracy: 66.29629629629629%
```
The function used to tokenize a sentence is:
```
def prepare_features(tokenizer, seq_1, max_seq_length=100,
                     zero_pad=True, include_CLS_token=True, include_SEP_token=True):
    ## Tokenize Input
    tokens_a = tokenizer.tokenize(seq_1)
    ## Truncate
    if len(tokens_a) > max_seq_length - 2:
        tokens_a = tokens_a[0:(max_seq_length - 2)]
    ## Initialize Tokens
    tokens = []
    if include_CLS_token:
        tokens.append(tokenizer.cls_token)
    ## Add Tokens and separators
    for token in tokens_a:
        tokens.append(token)
    if include_SEP_token:
        tokens.append(tokenizer.sep_token)
    input_ids = tokenizer.convert_tokens_to_ids(tokens)
    ## Input Mask
    input_mask = [1] * len(input_ids)
    ## Zero-pad sequence length
    if zero_pad:
        while len(input_ids) < max_seq_length:
            input_ids.append(0)
            input_mask.append(0)
    return torch.tensor(input_ids).squeeze(0), input_mask
```
What do you advise me to do in order to investigate this strange result?
Thanks,
Amr | 01-07-2020 12:46:36 | 01-07-2020 12:46:36 | First off, your learning rate might be too low, but even then it is odd to see the exact same accuracy all the time. You'll have to have a look at your dataset. Are all your classes in the training, validation, and test sets? Are some classes weighted? It's quite hard to help with this.
Also, accuracy is a crude measure. Have a look at how your f1 evolves over time. IIRC sklearn has an utility to also calculate a "test report" where you can see how well all classes are predicted. Might be worth investigating too. <|||||>> First of, your learning rate might be too low but even then it is odd to see the exact same accuracy all the time. You'll have to have a look at your dataset. Are all your classes in both training, validation, and test set? Are some classes weighted? It's quite hard to help with this.
>
> Also, accuracy is a crude measure. Have a look at how your f1 evolves over time. IIRC sklearn has an utility to also calculate a "test report" where you can see how well all classes are predicted. Might be worth investigating too.
Hmm, the learning rate is the default `1e-5`. I am sure the classes are available in the training and validation datasets.
Since the model is overfitting, sklearn generates a warning that the F1 score is ill-defined, since the model always predicts 0.
This extreme over-fitting seems strange to me; I will try lowering the learning rate.
Here is the Google Colab Notebook url in case you want to have a look: https://colab.research.google.com/drive/1VLt_a-lxLdibYFGnZFDncm1Ib28es57A
Thanks :smile: <|||||>A few things: batch_size seems rather small but not so much that it could explain the issue. You do have a big difference in input data length (median of 37 and max of 165 tokens), and with a small batch size this may not average well. I'm not sure if it's standard practice to evaluate every 1000 steps in the training loop (I'd evaluate each epoch after all training data has been seen, at least for a dataset this size), but also that won't explain the problem.
You can try printing out the accuracy during training, too, and see if it is overfitting. <|||||>I am having a similar problem with my own data. It was predicting the majority class all the time. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I have the same problem when I fine-tune XLM for a two-class sentence classification task. Not only does it predict the majority class all the time, it also gives exactly the same probability for different cases! Has anyone found a solution to that?
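The "test report" mentioned earlier in this thread is sklearn's `classification_report`, which shows per-class precision/recall/F1 and makes a majority-class collapse obvious. A minimal sketch (assuming `y_true`/`y_pred` label lists collected from your own eval loop):
```python
from sklearn.metrics import classification_report

print(classification_report(y_true, y_pred, digits=3))
```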
transformers | 2,430 | closed | T5_INPUTS_DOCSTRING correct!? | Is this docstring even correct?
The [CLS] and [SEP] tokens do not appear in the dictionaries provided by Google for the pretrained "t5-base" model. If this is true, could you then please provide a correct example of how to use the text generation feature of this almighty transformer? (BoolQ or QA) That would help me a lot.
Thanks!
```
T5_INPUTS_DOCSTRING = r"""
Inputs:
**input_ids**: ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``:
Indices of input sequence tokens in the vocabulary.
To match pre-training, T5 input sequence should be formatted with [CLS] and [SEP] tokens as follows:
(a) For sequence pairs:
``tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]``
(b) For single sequences:
``tokens: [CLS] the dog is hairy . [SEP]``
T5 is a model with relative position embeddings so you should be able to pad the inputs on
the right or the left.
Indices can be obtained using :class:`transformers.T5Tokenizer`.
See :func:`transformers.PreTrainedTokenizer.encode` and
:func:`transformers.PreTrainedTokenizer.convert_tokens_to_ids` for details.
**attention_mask**: (`optional`) ``torch.FloatTensor`` of shape ``(batch_size, sequence_length)``:
Mask to avoid performing attention on padding token indices.
Mask values selected in ``[0, 1]``:
``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.
**head_mask**: (`optional`) ``torch.FloatTensor`` of shape ``(num_heads,)`` or ``(num_layers, num_heads)``:
Mask to nullify selected heads of the self-attention modules.
Mask values selected in ``[0, 1]``:
``1`` indicates the head is **not masked**, ``0`` indicates the head is **masked**.
"""
``` | 01-07-2020 12:31:51 | 01-07-2020 12:31:51 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
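As the report says, the pretrained T5 vocabularies have no [CLS]/[SEP] entries; T5 sequences simply end with the `</s>` EOS token and tasks are expressed as plain-text prefixes. A tiny sketch to inspect this (assuming the `t5-base` checkpoint can be downloaded):
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
text = "question: is this jacksonville? context: no it is not."
ids = tokenizer.encode(text)
print(tokenizer.convert_ids_to_tokens(ids))
print(tokenizer.eos_token, tokenizer.pad_token)  # '</s>' and '<pad>', no [CLS]/[SEP]
```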
|
transformers | 2,429 | closed | Error occurs when running run_lm_finetuning.py |
- Environment
> - python 3.6.9
> - torch 1.1.0
> - have installed transformers
- Command
> python run_lm_finetuning.py --output_dir=output --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --mlm
- Error
> Traceback (most recent call last):
File "run_lm_finetuning.py", line 498, in <module>
main()
File "run_lm_finetuning.py", line 447, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)
File "run_lm_finetuning.py", line 96, in load_and_cache_examples
dataset = TextDataset(tokenizer, file_path=args.eval_data_file if evaluate else args.train_data_file, block_size=args.block_size)
File "run_lm_finetuning.py", line 78, in __init__
self.examples.append(tokenizer.add_special_tokens_single_sequence(tokenized_text[:block_size]))
**AttributeError: 'BertTokenizer' object has no attribute 'add_special_tokens_single_sequence'**
- Help
> How to deal with it?
| 01-07-2020 11:55:09 | 01-07-2020 11:55:09 | What is your version of transformers?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
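A quick way to answer the question above and confirm whether the installed release matches the example script being run:
```python
import transformers

print(transformers.__version__)
```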
|
transformers | 2,428 | closed | Padding part output in BERT NER task is not [PAD]? | ## ❓ Questions & Help
Hello, my friends.
I have a problem while I am trying to do a Chinese NER task.
To make it easy to understand, I will use some English words instead.
Assume padding length =128, here is the sentence:
> [CLS] Marilyn Monroe is an famous actress. [SEP] [PAD] [PAD] ... [PAD]
After I put it into `BertForTokenClassification`, I got an output like:
>[CLS] Marilyn Monroe is an famous actress.
>[CLS] B-PER I-PER O O O O O
That looks good, but when the output reaches the padding area it becomes strange:
>[SEP] [PAD] [PAD] [PAD] [PAD] [PAD] ... [PAD]
>O O O O O [CLS] [CLS] O ...O
It seems the padding area (including the [SEP] token) gets random outputs (they should be [SEP] and [PAD]), and most of them are 'O' with a few [CLS] or [B-PER]/[I-PER].
I am confused about that.
I am sure that:
- I already set `attention_masks` on padding area while training, attention_masks on [PAD] are 0.
- I already set `token_type_ids`, token_type_ids on [PAD] are 1.
and also in eval with `attention_mask` and `token_type_ids`.
So that's what bothers me: I have to cut off the padded part of the output until I solve this problem, and I don't think that's a good idea.
Can someone help me? :( | 01-07-2020 09:47:37 | 01-07-2020 09:47:37 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
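For later readers: cutting the padded positions is in fact the usual approach. The model emits a logit for every position; attention_mask only prevents padded tokens from influencing the real ones, it does not suppress their outputs. A rough sketch (the `logits` and `attention_mask` variables are assumed to come from your own eval loop):
```python
import torch

# logits: (batch, seq_len, num_labels), attention_mask: (batch, seq_len)
predictions = logits.argmax(dim=-1)
active = attention_mask.bool()
# keep only the predictions at non-padded positions, per example
active_predictions = [p[m].tolist() for p, m in zip(predictions, active)]
```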
|
transformers | 2,427 | closed | ALBERT model does not work as expected | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
I am new to Transformers. I tried the example for class AlbertForQuestionAnswering from huggingface.co. The results in each run are different and not correct. Please help.
Thanks,
Tuan Anh | 01-07-2020 09:43:53 | 01-07-2020 09:43:53 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,426 | closed | Make doc regarding masked indices more clear | See this [issue](https://github.com/huggingface/transformers/issues/2418) for details, basically there used to be different ways of specifying masked indices (either -1 or -100), which was fixed by this [commit](https://github.com/huggingface/transformers/commit/418589244d263087f1d48655f621a65f2a5fcba6).
However, the doc remains unclear; this PR fixes that.
I had an issue originally because I was using a version which did not incorporate the uniformisation yet. | 01-07-2020 09:14:43 | 01-07-2020 09:14:43 | Fantastic, thanks @r0mainK! |
transformers | 2,425 | closed | Tokenize whole sentence vs. tokenize words in sentence then concat | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
When using En-Fr XLMModel in transformers library,
I found that the result of tokenizing a whole sentence is different from tokenizing the words in the sentence and then concatenating them.
My configuration is as below
**(XLMModel, XLMTokenizer, XLMConfig, 'xlm-mlm-enfr-1024')**
The result is as below

The ultimate goal is to 'detokenize' tokenized sentence which is
['I', 'love', 'swim', '##ing'] -> ['I', 'love', 'swimming']
In order to do this, I have to know the raw token's index for each tokenized token.
It would be great if anyone can help with this problem. | 01-07-2020 08:12:50 | 01-07-2020 08:12:50 | Seems like this is a duplicate of https://github.com/huggingface/transformers/issues/2140<|||||>Thank you for mentioning the same issue! |
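A rough sketch of the merging idea for WordPiece-style output is below. Note this assumes BERT-style "##" continuation markers, as in the example above; XLM's BPE tokenizer marks word boundaries differently (e.g. with "</w>" suffixes), so the condition would need to be adapted.
```python
def merge_wordpieces(tokens):
    # collapse '##'-prefixed continuation pieces into the previous word
    words = []
    for tok in tokens:
        if tok.startswith("##") and words:
            words[-1] += tok[2:]
        else:
            words.append(tok)
    return words
```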
transformers | 2,424 | closed | convert tf ckpt to pytorch_model.bin, load back model (TFBertModel), loses params | ```
import os
pretrained_path = 'Models/chinese_L-12_H-768_A-12'
config_path = os.path.join(pretrained_path, 'bert_config.json')
checkpoint_path = os.path.join(pretrained_path, 'bert_model.ckpt.index')
config = BertConfig.from_pretrained(config_path)
model = BertForPreTraining.from_pretrained(checkpoint_path, from_tf=True, config=config)
model.save_pretrained('Models/chinese')
```
INFO:transformers.configuration_utils:Configuration saved in Models/chinese/config.json
INFO:transformers.modeling_utils:Model weights saved in Models/chinese/pytorch_model.bin
Then load the saved model:
```
# Load the model
config = BertConfig.from_json_file("Models/chinese/config.json")
tfmodel = TFBertModel.from_pretrained('Models/chinese/',from_pt=True, config=config)
```
INFO:transformers.modeling_tf_utils:loading weights file Models/chinese/pytorch_model.bin
INFO:transformers.modeling_tf_pytorch_utils:Loading PyTorch weights from /home/work/Bert/Models/chinese/pytorch_model.bin
INFO:transformers.modeling_tf_pytorch_utils:PyTorch checkpoint contains 119,108,746 parameters
INFO:transformers.modeling_tf_pytorch_utils:Loaded 102,267,648 parameters in the TF 2.0 model.
INFO:transformers.modeling_tf_pytorch_utils:Weights or buffers not loaded from PyTorch model: {'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias'}
The PyTorch checkpoint contains 119,108,746 parameters, but only 102,267,648 parameters were loaded into the TF 2.0 model.
| 01-07-2020 07:01:38 | 01-07-2020 07:01:38 | Try changing it from TFBERT model to BertModel. Since you already converted it to a pytorch checkpoint. <|||||>@zanderkent
Is there any tool that can convert a TF ckpt to tf_model.h5, so I can use the TFBert class to load it?
Because I want to use tf2 model.fit to train, and use the tf2 strategy for distributed training.<|||||>If I am not mistaken transformers would allow you to use a tf CKPT using TFBert. In your first part of the code you converted it to pytorch. When you initially load the model, save it as a TensorFlow model.
Anyone else have any other ideas?<|||||>@zanderkent
Can you give a demo for that? I still cannot know how to do
Thanks
If I use:
model = TFBertForPreTraining.from_pretrained(checkpoint_path, config=config)
will get the log info
```
INFO:transformers.modeling_tf_utils:loading weights file Models/chinese_L-12_H-768_A-12/bert_model.ckpt.index
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/util.py:1249: NameBasedSaverStatus.__init__ (from tensorflow.python.training.tracking.util) is deprecated and will be removed in a future version.
Instructions for updating:
Restoring a name-based tf.train.Saver checkpoint using the object-based restore API. This mode uses global names to match variables, and so is somewhat fragile. It also adds new restore ops to the graph each time it is called when graph building. Prefer re-encoding training checkpoints in the object-based format: run save() on the object-based saver (the same one this message is coming from) and use that checkpoint in the future.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/util.py:1249: NameBasedSaverStatus.__init__ (from tensorflow.python.training.tracking.util) is deprecated and will be removed in a future version.
Instructions for updating:
Restoring a name-based tf.train.Saver checkpoint using the object-based restore API. This mode uses global names to match variables, and so is somewhat fragile. It also adds new restore ops to the graph each time it is called when graph building. Prefer re-encoding training checkpoints in the object-based format: run save() on the object-based saver (the same one this message is coming from) and use that checkpoint in the future.
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-6-3e9de93c3943> in <module>
----> 1 model = TFBertForPreTraining.from_pretrained(checkpoint_path, config=config)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
315 # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357
316 try:
--> 317 model.load_weights(resolved_archive_file, by_name=True)
318 except OSError:
319 raise OSError("Unable to load weights from h5 file. "
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in load_weights(self, filepath, by_name)
179 raise ValueError('Load weights is not yet supported with TPUStrategy '
180 'with steps_per_run greater than 1.')
--> 181 return super(Model, self).load_weights(filepath, by_name)
182
183 @trackable.no_automatic_dependency_tracking
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py in load_weights(self, filepath, by_name)
1150 if by_name:
1151 raise NotImplementedError(
-> 1152 'Weights may only be loaded based on topology into Models when '
1153 'loading TensorFlow-formatted weights (got by_name=True to '
1154 'load_weights).')
NotImplementedError: Weights may only be loaded based on topology into Models when loading TensorFlow-formatted weights (got by_name=True to load_weights).
```<|||||>Sorry, someone else will have to help you with that. I am only vaguely familiar with this library. <|||||>I get the same error `NotImplementedError: Weights may only be loaded based on topology into Models when loading TensorFlow-formatted weights (got by_name=True to load_weights).` and have no idea what it means!
I'm trying to:
```
model_dir = 'my/dir/to/bert/model'
config = BertConfig.from_json_file(model_dir + '/bert_config.json')
config.num_labels = 14
model = TFBertForSequenceClassification.from_pretrained(model_dir + '/bert_model.ckpt.index', config = config)
```
But this gives me the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mydir/share/virtualenvs/pdBERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 317, in from_pretrained
model.load_weights(resolved_archive_file, by_name=True)
File "/mydir/share/virtualenvs/pdBERTlm-giwpujkO/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 234, in load_weights
return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
File "/mydir/share/virtualenvs/pdBERTlm-giwpujkO/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1196, in load_weights
'Weights may only be loaded based on topology into Models when '
NotImplementedError: Weights may only be loaded based on topology into Models when loading TensorFlow-formatted weights (got by_name=True to load_weights).
```
If I try to run `BertForSequenceClassification` (with from_tf = True), this error shows:
```
>>> model = BertForSequenceClassification.from_pretrained(model_dir + '/bert_model.ckpt.index', from_tf = True, config = config)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mydir/virtualenvs/pdBERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_utils.py", line 427, in from_pretrained
model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
File "/mydir/virtualenvs/pdBERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_bert.py", line 99, in load_tf_weights_in_bert
pointer = getattr(pointer, 'bias')
File "/mydir/virtualenvs/pdBERTlm-giwpujkO/lib/python3.7/site-packages/torch/nn/modules/module.py", line 585, in __getattr__
type(self).__name__, name))
AttributeError: 'BertForSequenceClassification' object has no attribute 'bias'
```
However if I run the `transformers-cli convert` and then load the pytorch model, it all works fine...
```
export BERT_BASE_DIR=my/dir/to/bert/model
transformers-cli convert --model_type bert \
--tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \
--config $BERT_BASE_DIR/bert_config.json \
--pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Does someone know how to fix it?
` NotImplementedError: Weights may only be loaded based on topology into Models when loading TensorFlow-formatted weights (got by_name=True to load_weights). ` |
transformers | 2,423 | closed | [DistillBERT] tokenizer issue of multilingual-cased | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): DistilBERT
Language I am using the model on (English, Chinese....): Korean
The problem arise when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
When I tokenize Korean with `transformers.DistilBertTokenizer` using the **bert-base-multilingual-cased** vocab, every Korean token is mapped to [UNK].
```python
from transformers import DistilBertTokenizer
ko_text = "CNP차앤박화장품 역시 국내 대표 ‘피부과 출신’ 화장품 브랜드다. CNP차앤박화장품‘프로폴리스 앰플 오일 인 크림’은 브랜드 베스트셀러인 프로폴리스 에너지 앰플에 오일을 함유해 보습 기능을 강화한 제품이다."
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-multilingual-cased')
print(tokenizer.tokenize(ko_text))
```
Results
['[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '.', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '.']
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
However, tokenizing the Korean text with `transformers.BertTokenizer` using the **bert-base-multilingual-cased** vocab returns the expected results.
```python
from transformers import BertTokenizer
ko_text = "CNP차앤박화장품 역시 국내 대표 ‘피부과 출신’ 화장품 브랜드다. CNP차앤박화장품‘프로폴리스 앰플 오일 인 크림’은 브랜드 베스트셀러인 프로폴리스 에너지 앰플에 오일을 함유해 보습 기능을 강화한 제품이다."
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
print(tokenizer.tokenize(ko_text))
```
Results
['CN', '##P', '##차', '##앤', '##박', '##화', '##장', '##품', '역시', '국', '##내', '대', '##표', '[UNK]', '피', '##부', '##과', '출', '##신', '[UNK]', '화', '##장', '##품', '브', '##랜드', '##다', '.', 'CN', '##P', '##차', '##앤', '##박', '##화', '##장', '##품', '[UNK]', '프로', '##폴', '##리스', '[UNK]', '오', '##일', '인', '크', '##림', '[UNK]', '은', '브', '##랜드', '베', '##스트', '##셀', '##러', '##인', '프로', '##폴', '##리스', '에', '##너', '##지', '[UNK]', '오', '##일', '##을', '함', '##유', '##해', '보', '##습', '기', '##능을', '강', '##화', '##한', '제', '##품', '##이다', '.']
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Ubuntu 18.04.3 LTS
* Python version: 3.6.8
* PyTorch version: 1.3.0+cu100
* PyTorch Transformers version (or branch): 2.2.2
* Using GPU ? : Not in this issue.
* Distributed or parallel setup ? No
* Any other relevant information: I use the docker image `horovod/horovod:0.18.2-tf2.0.0-torch1.3.0-mxnet1.5.0-py3.6-gpu`
## Additional context
I tested `transformers.DistilBertTokenizer` with the **bert-base-multilingual-cased** vocab on English text, and it returns the expected results.
So it seems that subclassing DistilBertTokenizer from BertTokenizer is the problem... How can I solve this issue?
<!-- Add any other context about the problem here. -->
| 01-07-2020 05:51:35 | 01-07-2020 05:51:35 | Hi, thanks for raising this issue! This is due to the lower casing parameter which is not correctly initialized for DistilBERT. I'm fixing it in #2469.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
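For later readers: until that fix is released, one possible workaround (assuming the problem really is only the lowercasing/accent-stripping default) is to set the casing flag explicitly when loading the tokenizer:
```python
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained(
    "distilbert-base-multilingual-cased", do_lower_case=False
)
print(tokenizer.tokenize("CNP차앤박화장품 역시 국내 대표"))
```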
|
transformers | 2,422 | closed | Is it possible to load a local model? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
For some reason (GFW), I need to download the pretrained model first and then load it locally. But the source code tells me the following:
```
pretrained_model_name_or_path: either:
- a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
- a string with the `identifier name` of a pre-trained model that was user-uploaded to our S3, e.g.: ``dbmdz/bert-base-german-cased``.
- a path to a `directory` containing model weights saved using :func:`~transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/``.
- a path or url to a `tensorflow index checkpoint file` (e.g. `./tf_model/model.ckpt.index`). In this case, ``from_tf`` should be set to True and a configuration object should be provided as ``config`` argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- None if you are both providing the configuration and state dictionary (resp. with keyword arguments ``config`` and ``state_dict``)
```
I wanna download a pretrained model and load it locally with from_pretrained api, How can I do that? | 01-07-2020 03:13:42 | 01-07-2020 03:13:42 | You can use that third option and use a directory. Alternatively, I think you can also do
```python
model = DistilBertModel(DistilBertConfig())
model.load_state_dict(torch.load(<path>))
```<|||||>Thanks for your advice. I'll give it a try!<|||||>I found a solution. If you want to use a pretrained model offline, you can download all the files of the model. For example, if you want to use "chinese-xlnet-mid", you can find the files at https://s3.amazonaws.com/models.huggingface.co/ like below:

Now you can download all the files you need by typing the URL in your browser, e.g. `https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-xlnet-mid/added_tokens.json`.
Put all these files into a single folder; then you can use the model offline.
```
tokenizer = XLNetTokenizer.from_pretrained('your-folder-name')
model = XLNetModel.from_pretrained('your-folder-name')
```
If anyone has the same problem, maybe you can try this method. I'll close this issue. Thanks.<|||||>It can be done as the documentation suggests.
Once you've got the pre-trained tokenizer and model loaded the first time via (say for T5):
```
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelWithLMHead.from_pretrained("t5-small")
```
You can then save them locally via:
```
tokenizer.save_pretrained('./local_model_directory/')
model.save_pretrained('./local_model_directory/')
```
And then simply load from the directory:
```
tokenizer = AutoTokenizer.from_pretrained('./local_model_directory/')
model = TFAutoModelWithLMHead.from_pretrained('./local_model_directory/')
```
<|||||>> You can use that third option and use a directory. Alternatively, I think you can also do
>
> ```python
> model = DistilBertModel(DistilBertConfig())
> model.load_state_dict(torch.load(<path>))
> ```
Saved my day. I had a custom model deriving from a pretrained model class.<|||||>It seems that with the new version (4.11.3), you can load a local model as below:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
model = AutoModelForTokenClassification.from_pretrained('./local_model_directory/')
tokenizer = AutoTokenizer.from_pretrained('./local_model_directory/')
```<|||||>When I use "huggingface/CodeBERTa-small-v1", the method with
tokenizer = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
model = TFAutoModelWithLMHead.from_pretrained("huggingface/CodeBERTa-small-v1")
then save them locally via:
tokenizer.save_pretrained('./local_model_directory/')
model.save_pretrained('./local_model_directory/')
And then simply load from the directory:
tokenizer = AutoTokenizer.from_pretrained('./local_model_directory/')
model = TFAutoModelWithLMHead.from_pretrained('./local_model_directory/')
This method raises an error:
KeyError: 'logits'
When I download "huggingface/CodeBERTa-small-v1" by
git clone https://huggingface.co/huggingface/CodeBERTa-small-v1
(https://gitlost-murali.github.io/blogs/nlp/huggingface/download-huggingface-models)
then load model by:
tokenizer = RobertaTokenizer.from_pretrained('./local_model_directory/')
model = RobertaForMaskedLM.from_pretrained('./local_model_directory/')
OK! |
transformers | 2,421 | closed | [Albert] SentencePiece Error with AlbertTokenizer | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):Albert v2
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [X] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: Sequence Classification
## To Reproduce
Steps to reproduce the behavior:
```
from transformers import AlbertTokenizer
from pyspark.sql import functions as F, types as T
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
# load data into spark
df = spark.read...
# df.columns => ["id", "text"]
# create a function to create the tokens from a supplied text column
def create_tokens(text=None, tokenizer=None):
    tokens = ["[CLS]"]
    tokens.extend(tokenizer.tokenize(text))
    tokens.append("[SEP]")
    return tokens
create_tokens_udf = F.udf(lambda z: create_tokens(z, tokenizer=tokenizer), T.ArrayType(T.StringType()))
# apply the udf to the text
tokenized_df = df.withColumn("tokens", create_tokens_udf(F.column("text")))
# trigger the transformation
tokenized_df.cache().count()
```
The following traceback is observed (i've excluded the Py4J tracebacks for clarity):
```
File "<command-3594705570096092>", line 10, in create_tokens
File "/databricks/python/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 302, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/databricks/python/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 438, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/databricks/python/lib/python3.7/site-packages/transformers/tokenization_albert.py", line 90, in __init__
self.sp_model.Load(vocab_file)
File "/databricks/python/lib/python3.7/site-packages/sentencepiece.py", line 118, in Load
return _sentencepiece.SentencePieceProcessor_Load(self, filename)
RuntimeError: Internal: unk is not defined.
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
The text gets tokenized as expected.
## Environment
* OS: Linux(?)
* Python version: 3.7.3
* PyTorch version: ?
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU N/A
* Distributed or parallel setup: Distributed (Databricks)
* Any other relevant information:
## Additional context
Thanks a million for this amazing library! | 01-07-2020 00:48:48 | 01-07-2020 00:48:48 | Actually, looking more closely, this seems to be a `sentencepiece` issue, right?<|||||>Hi! Is there a way you could isolate where the error happens? Is it when you're initializing the tokenizer with the line `tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")` ?<|||||>Hi! No, initialization seems to work fine; it is when I actually attempt to apply the tokenizer in the `create_tokens` function, so:
`tokens.extend(tokenizer.tokenize(text))`<|||||>I also have the same issue when fine-tuning the model on my own task:
```python
INFO:tensorflow:loading sentence piece model
I0110 02:56:05.196459 139883558270784 tokenization.py:240] loading sentence piece model
Traceback (most recent call last):
File "run_classifier.py", line 494, in <module>
tf.app.run()
File "/root/anaconda3/envs/jupyterlab/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/root/anaconda3/envs/jupyterlab/lib/python3.7/site-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/root/anaconda3/envs/jupyterlab/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_classifier.py", line 204, in main
spm_model_file=FLAGS.spm_model_file)
File "/home/vigosser/ALBERT/tokenization.py", line 254, in from_scratch
return FullTokenizer(vocab_file, do_lower_case, spm_model_file)
File "/home/vigosser/ALBERT/tokenization.py", line 241, in __init__
self.sp_model.Load(spm_model_file)
File "/root/anaconda3/envs/jupyterlab/lib/python3.7/site-packages/sentencepiece.py", line 118, in Load
return _sentencepiece.SentencePieceProcessor_Load(self, filename)
RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(73) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
```<|||||>@vigosser I believe this is an issue with SentencePiece itself rather than Transformers. I was looking at the repo for SentencePiece and it is a little confusing; according to this [issue](https://github.com/google/sentencepiece/issues/344) it seems that we shouldn't be using SentencePiece as of this past summer, but instead should use tf.text, but according to that same issue, the integration is not complete.
I also have a feeling this is broken due to something in the TF 2.0 API, but that's not based on anything in particular.
Thoughts @LysandreJik ?<|||||>@jmwoloso this problem happened because of the wrong "spm_model_file"
the following command solves the problem:
```bash
python run_classifier.py \
--task_name=mail \
--do_predict=true \
--do_train=true \
--do_eval=true \
--spm_model_file=$modelpath/30k-clean.model \
--data_dir=/data \
--vocab_file=$modelpath/30k-clean.vocab \
--albert_config_file=$modelpath/albert_config.json \
--init_checkpoint=$modelpath/model.ckpt-best.index \
--max_seq_length=128 \
--train_batch_size=8 \
--output_dir=/data/output \
--learning_rate=15e-6 \
--num_train_epochs=3.0 \
```
<|||||>Glad you found a solution to your issue @vigosser! My issue is different from yours, though (at least I think it is). I'm using Albert from within my own custom script and just trying to tokenize some text so that I can train on it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@jmwoloso Did you get around to solving your problem?
I have a similar issue when using my trained sentencepiece tokenizer with Albert to train my corpus. |
transformers | 2,420 | closed | Bug Transformers 2.3.0 - ValueError: invalid literal for int() with base 10: 'pytorch' | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): BERT
Language I am using the model on (English, Chinese....): PT-BR (Multilingual)
The problem arise when using:
* [ ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
CUDA_VISIBLE_DEVICES=2,3 nohup python /home/lucasrodrigues/code/transformers-2.3.0/examples/run_lm_finetuning.py \
--output_dir=/home/lucasrodrigues/train/transformers/output/multi/250k/ \
--model_type=bert \
--model_name_or_path=/home/lucasrodrigues/train/transformers/model-multi-pytorch/ \
--tokenizer_name=/home/lucasrodrigues/train/transformers/model-multi-pytorch/ \
--config_name=/home/lucasrodrigues/train/transformers/model-multi-pytorch/ \
--block_size=510 \
--do_lower_case \
--train_data_file=/home/lucasrodrigues/datasets/nilc/initial/initial_corpus_train.txt \
--eval_data_file=/home/lucasrodrigues/datasets/nilc/initial/initial_corpus_train.txt \
--do_train \
--do_eval \
--evaluate_during_training \
--logging_steps=50 \
--save_steps=50 \
--per_gpu_train_batch_size=2 \
--per_gpu_eval_batch_size=2 \
--mlm \
> logs/transformers_multi_250k.txt &
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
Traceback (most recent call last):
File "/home/lucasrodrigues/code/transformers-2.3.0/examples/run_lm_finetuning.py", line 713, in <module>
main()
File "/home/lucasrodrigues/code/transformers-2.3.0/examples/run_lm_finetuning.py", line 663, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "/home/lucasrodrigues/code/transformers-2.3.0/examples/run_lm_finetuning.py", line 268, in train
global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0])
ValueError: invalid literal for int() with base 10: 'pytorch'
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Ubuntu
* Python version: 3.6
* PyTorch version: 1.2.0
* PyTorch Transformers version (or branch):
* Using GPU ? 4x GeForce GTX 1080 Ti
* Distributed or parallel setup ? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
I can't execute the code, could anyone help? | 01-07-2020 00:45:35 | 01-07-2020 00:45:35 | Hi, thank you for raising this issue. Could you please let me know if 27c1b656cca75efa0cc414d3bf4e6aacf24829de fixed this issue by trying the updated script?<|||||>Hello, to solve this problem I added my checkpoint to a folder that has the same Transformer output.
**new folder -> checkpoint-0**
Folders:
|
checkpoint-0
| vocab.txt
| pytorch_model.bin
| config.json
global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0])
**Result:
global_step = 0**<|||||>> Hi, thank you for raising this issue. Could you please let me know if [`27c1b65`](https://github.com/huggingface/transformers/commit/27c1b656cca75efa0cc414d3bf4e6aacf24829de) fixed this issue by trying the updated script?
@LysandreJik, your commit fixed the issue for me, thanks!<|||||>> Hi, thank you for raising this issue. Could you please let me know if [27c1b65](https://github.com/huggingface/transformers/commit/27c1b656cca75efa0cc414d3bf4e6aacf24829de) fixed this issue by trying the updated script?
I think this modification is problematic: some people download pretrained models as just a pytorch_model.bin alone in a dir, and when such a path is used in "global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0])", what is that number supposed to mean? @LysandreJik, as [this_issue](https://github.com/huggingface/transformers/issues/2258) said. Whoever added "global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0])", could you please fix this bug? I don't remember this being there before.<|||||>@severinsimmler, I agree with @zysNLP, this introduces a bug when you try to use an LM that wasn't output into a checkpoint folder. I used `run_language_modeling.py` to output an LM, which I then feed into `run_glue.py`. This pipeline no longer works because `run_glue.py` tries to parse a global step number from a folder name that doesn't have one. Renaming my folder to checkpoint-0 and then feeding it into `run_glue.py` shouldn't have to be done; surely `args.model_name_or_path.split("-")[-1].split("/")[0]` can be modified slightly so that it returns `0` instead of `""`, so that models in non-checkpoint folders can be used.<|||||>@stefan-it's solution in https://github.com/huggingface/transformers/issues/2258 fixes the issue. This or something similar should be added to all the example scripts.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
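For reference, the defensive parse suggested in the comments above could look roughly like this; this is an editor's sketch, not the patch that was actually merged:
```python
def parse_global_step(model_name_or_path: str) -> int:
    """Return the step encoded in a 'checkpoint-<step>' path, or 0 if there is none."""
    suffix = model_name_or_path.split("-")[-1].split("/")[0]
    return int(suffix) if suffix.isdigit() else 0

assert parse_global_step("output/checkpoint-500") == 500
assert parse_global_step("/home/user/model-pytorch/") == 0  # no longer raises ValueError
```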
|
transformers | 2,419 | closed | Is there a way to reduce the vocabulary size? | ## ❓ Questions & Help
For a fine-tuning task, is it possible to reduce the vocabulary size?
Does simply editing the vocab and config files work?
<!-- A clear and concise description of the question. -->
| 01-07-2020 00:30:35 | 01-07-2020 00:30:35 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,418 | closed | Unclear documentation for indice masking | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): CamemBERT, but this probably applies to all MLMs.
Language I am using the model on (English, Chinese....): French
The problem arise when using:
* [x] my own modified scripts, but I suspect that `https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py` is also impacted.
Basically, the masking procedure raises an assertion error device-side when I try to run something akin to:
```
model(labels, masked_lm_labels=labels)
```
I pinpointed the error to be due to the fact that marking values to be ignored in the labels with `-100`, like [here in the `run_lm_finetuning.py` script](https://github.com/huggingface/transformers/blob/81d6841b4be25a164235975e5ebdcf99d7a26633/examples/run_lm_finetuning.py#L179), is probably deprecated. The documentation is unclear on the subject, as it says:
> **masked_lm_labels:** (optional) torch.LongTensor of shape (batch_size, sequence_length):
>
> Labels for computing the masked language modeling loss.
> Indices should be in [-1, 0, ..., config.vocab_size] (see input_ids docstring)
> Tokens with indices set to -100 are ignored (masked), the loss is only computed
> for the tokens with labels in [0, ..., config.vocab_size]
As you can see, information is contradictory: on one hand, they say values should be between [-1, vocab_size], but also say like in the script that tokens with values -100 are ignored. I tried, and using value -1 does indeed work.
The task I am working on is:
* [x] my own task or dataset: I am fine-tuning the CamemBERT pretrained model on an MLM task before reusing the model for a sentence classification one.
## To Reproduce
Steps to reproduce the behavior:
```
import torch
from transformers import CamembertForMaskedLM
model = CamembertForMaskedLM.from_pretrained(
"camembert-base", cache_dir="models/pretrained_camembert"
)
inputs = torch.full((30, 1), 4).to(torch.long)
labels = inputs.clone()
labels[10] = -100
model(inputs, masked_lm_labels=labels)
```
This gives:
```
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:97
```
If you run it on GPU a similar error is raised.
## Expected behavior
Should return a loss.
## Environment
* OS: Ubuntu 18.04
* Python version: 3.6.9
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.2.1
* Using GPU ? Both do not work.
* Distributed or parallel setup ?
* Any other relevant information: The issue can be solved by using -1 instead of -100. As I said, I think at some point you switched to using -1 instead of -100 but did not propagate the change entirely to the doc and examples. | 01-06-2020 15:50:24 | 01-06-2020 15:50:24 | Okay, my bad, it seems this was actually intentional: [this commit](https://github.com/huggingface/transformers/commit/418589244d263087f1d48655f621a65f2a5fcba6) was merged and integrated in either version 2.2.2 or 2.3, causing the error on my version. It seems the current proper way to do this is indeed by specifying `-100` as the index.
The doc is unclear though, this sentence: `Indices should be in [-1, 0, ..., config.vocab_size]` should be `Indices should be in [-100, 0, ..., config.vocab_size]`.
Anyway cheers, I [PRed](https://github.com/huggingface/transformers/pull/2426) the documentation fix everywhere it's needed if you wanna have a look, but regardless feel free to close this issue.<|||||>@LysandreJik merged the PR for the doc, however I just realized that I incorrectly assumed the commit was part of 2.3 or 2.2.2, from the merge date of the uniformisation commit. It is currently only in the master branch but not in any tagged version, which means anyone that gets the above bug should switch to -1 until that is the case. Here is the error I got when training on GPU by the way:
```
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106:
void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *,
Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float,
Acctype = float]: block: [0,0,0], thread: [31,0,0]
Assertion `t >= 0 && t < n_classes` failed.
```<|||||>Thanks for figuring this out!
This was a hair-pulling bug due to the fact that the conda package from the pytorch channel has the updated version while a pypi package with a release tag does not...I was wondering why indice masking for bert labels was having such issues in the conda version 1.3.1 and the pip version 1.3.1 (they're labeled as the same version D:)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hello, thanks for sharing.
I also want to fine-tune the CamemBERT pretrained model on an MLM task, for later extraction of sentence embeddings and then clustering. I am a bit confused about how to use the Trainer to fine-tune.
Should I create the masked_lm_labels myself, with indices in [-100, 0, ..., config.vocab_size]? But then how do I know which word is masked?
Could you share a piece of code if it's not too much trouble? Thank you in advance. |
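For anyone landing here with the same question: below is a heavily simplified sketch of how the example scripts build `masked_lm_labels`. The real `mask_tokens` in `run_lm_finetuning.py` additionally protects special/padding tokens and only replaces 80% of the selected positions with the mask token (10% random, 10% unchanged).
```python
import torch

def mask_tokens(inputs, tokenizer, mlm_probability=0.15):
    # inputs: LongTensor of token ids; returns (masked inputs, labels for the MLM loss)
    labels = inputs.clone()
    probability_matrix = torch.full(labels.shape, mlm_probability)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # only masked positions contribute to the loss
    inputs[masked_indices] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
    return inputs, labels
```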
transformers | 2,417 | closed | Albert to torchscript is not working | Trying to export torchscript module for AlbertForQuestionAnswering.
```
self.model = AlbertForQuestionAnswering.from_pretrained(self.model_dir)
script_model = torch.jit.script(self.model)
script_model.save("script_model.pt")
```
Getting the following exception:
```
Python builtin <built-in function next> is currently not supported in Torchscript:
at /usr/local/lib/python3.6/dist-packages/transformers/modeling_albert.py:523:67
device = input_ids.device if input_ids is not None else inputs_embeds.device
if attention_mask is None:
attention_mask = torch.ones(input_shape, device=device)
if token_type_ids is None:
token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
                                                                   ~~~~ <--- HERE
        extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
        if head_mask is not None:
            if head_mask.dim() == 1:
                head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
                head_mask = head_mask.expand(self.config.num_hidden_layers, -1, -1, -1, -1)
            elif head_mask.dim() == 2:
                head_mask = head_mask.unsqueeze(1).unsqueeze(-1).unsqueeze(-1)  # We can specify head_mask for each layer
            head_mask = head_mask.to(dtype=next(self.parameters()).dtype)  # switch to fload if need + fp16 compatibility
        else:
'__torch__.transformers.modeling_albert.___torch_mangle_15.AlbertModel.forward' is being compiled since it was called from '__torch__.transformers.modeling_albert.___torch_mangle_14.AlbertForQuestionAnswering.forward'
at /usr/local/lib/python3.6/dist-packages/transformers/modeling_albert.py:767:8
    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None):
        outputs = self.albert(
        ~~~~~~~~~~~~~~~~~~~~~~... <--- HERE
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds
        )
        sequence_output = outputs[0]
``` | 01-06-2020 13:42:13 | 01-06-2020 13:42:13 | Hi, the models can be traced using the `torch.jit.trace` method, not the `torch.jit.script`. This requires inputs of the same shape that will be used for inference. Here's an example:
```py
from transformers import AlbertForQuestionAnswering
import torch
inputs = torch.tensor([[1,2,3]])
model = AlbertForQuestionAnswering.from_pretrained("albert-base-v1")
script_model = torch.jit.trace(model, inputs)
script_model.save("script_model.pt")
```<|||||>Thanks for the help. `torch.jit.trace` works, but I see that the traced module's performance is worse than the untraced one in CPU-only mode. Any suggestion on what I might be doing wrong?
Architecture: linux-64
OS: ubuntu-1804
GPU: None
CUDA: None
torch: 1.3.1+cpu
transformers: 2.3.0
<|||||>When tracing the model, you will need to run through it once before so that it is traced, which usually takes quite some time. This is necessary to do the just-in-time optimizations. When you run it after this, the performance should be better.
Does the performance improve after the first iteration?
<|||||>My model is initialized like this.
```
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(self.device)
print('loading model...')
# Load your model here
self.tokenizer = AlbertTokenizer.from_pretrained(self.model_dir)
if os.path.isfile('traced_model.pt'):
self.model = torch.jit.load('traced_model.pt')
print('Loading traced model')
print(type(self.model))
else:
self.model = AlbertForQuestionAnswering.from_pretrained(self.model_dir)
print('Loading pytorch bin')
print(type(self.model))
self.model.to(self.device)
self.model.eval()
```
The perf measurement is done like this:
```
model_start = time.perf_counter()
with torch.no_grad():
if isinstance(self.model, torch.jit.ScriptModule):
start_scores, end_scores = self.model(
torch.tensor([all_input_ids])[0].to(self.device),
torch.tensor([all_attention_masks])[0].to(self.device),
torch.tensor([all_token_type_ids])[0].to(self.device)
)
start_scores_cpu = start_scores.cpu().tolist()
end_scores_cpu = end_scores.cpu().tolist()
print({ "TorchScriptExecutedInSec" : time.perf_counter() - model_start})
else:
start_scores, end_scores = self.model(
torch.tensor([all_input_ids])[0].to(self.device),
torch.tensor([all_attention_masks])[0].to(self.device),
torch.tensor([all_token_type_ids])[0].to(self.device)
)
start_scores_cpu = start_scores.cpu().tolist()
end_scores_cpu = end_scores.cpu().tolist()
print({ "PytorchModelExecutedInSec" : time.perf_counter() - model_start})
```
The PyTorch untraced latency looks like this (average: 0.92225 sec):
```
{'PytorchModelExecutedInSec': 0.888714800003072}
{'PytorchModelExecutedInSec': 0.9285387999989325}
{'PytorchModelExecutedInSec': 0.9449487999991106}
{'PytorchModelExecutedInSec': 0.8750040000013541}
{'PytorchModelExecutedInSec': 0.9282080000011774}
{'PytorchModelExecutedInSec': 0.8841497000030358}
{'PytorchModelExecutedInSec': 0.9255469999989145}
{'PytorchModelExecutedInSec': 0.9070025000000896}
{'PytorchModelExecutedInSec': 0.9690179000026546}
{'PytorchModelExecutedInSec': 0.9713676999999734}
```
And the traced TorchScript model (average: 0.98375664 sec):
```
{'TorchScriptExecutedInSec': 1.0122946000010415}
{'TorchScriptExecutedInSec': 0.9303289000017685}
{'TorchScriptExecutedInSec': 1.1499014000000898}
{'TorchScriptExecutedInSec': 1.0230705000030866}
{'TorchScriptExecutedInSec': 1.0278947000006156}
{'TorchScriptExecutedInSec': 0.9148064999972121}
{'TorchScriptExecutedInSec': 0.8976871999984724}
{'TorchScriptExecutedInSec': 0.9487294999998994}
{'TorchScriptExecutedInSec': 0.9489730000022973}
{'TorchScriptExecutedInSec': 0.9838801000005333}
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
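A note on the timings above: they include any one-off cost of the first traced run. A minimal sketch of warming the traced model up before timing, building on the earlier tracing snippet, would look like this:
```python
import time
import torch
from transformers import AlbertForQuestionAnswering

inputs = torch.tensor([[1, 2, 3]])
model = AlbertForQuestionAnswering.from_pretrained("albert-base-v1")
traced_model = torch.jit.trace(model, inputs)

with torch.no_grad():
    for _ in range(5):  # warm-up iterations, not timed
        traced_model(inputs)
    start = time.perf_counter()
    traced_model(inputs)
    print({"TorchScriptWarmExecutedInSec": time.perf_counter() - start})
```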
|
transformers | 2,416 | closed | Fixed answer structure for QAPipeline | Updated answers list in QuestionAnswering pipeline to handle multiple (question, context) pair with (top-k >1) solutions | 01-06-2020 13:20:45 | 01-06-2020 13:20:45 | Hi can you confirm this PR is a subset of #2459 and that we can close it now that #2459 is merged?<|||||>Sure, thanks!!
|
transformers | 2,415 | closed | greedy beam search generates same sequence N times | ## ❓ Questions & Help
I am doing greedy beam search (without sampling, to avoid randomness) using GPT-2. However, all the returned sequences are the same. Why is that the case? Shouldn't it give the N best, different sequences?
```python
model = GPT2LMHeadModel.from_pretrained('gpt2-medium').cuda()
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
input_context = 'The dog'
input_ids = torch.tensor(tokenizer.encode(input_context)).unsqueeze(0).cuda()
outputs = model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=3)
for i in range(3):
print('Generated {}: {}'.format(i, tokenizer.decode(outputs[0][i], skip_special_tokens=True)))
```
The resulting output is:
```
Generated 0: The dog was taken to a veterinary clinic for treatment.
The dog's owner said the!
Generated 1: The dog was taken to a veterinary clinic for treatment.
The dog's owner said the!
Generated 2: The dog was taken to a veterinary clinic for treatment.
The dog's owner said the!
``` | 01-06-2020 12:32:26 | 01-06-2020 12:32:26 | Took me most of the day to figure this out: set the `do_sample` arg to `True`
```
outputs = model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=3, do_sample=True)
```<|||||>Actually, I don't want to do sampling because that is random and would give different results each time I run for the same prompt.
I am looking for greedy beam search, which should give deterministic top sequences (which is not happening)! Sad.<|||||>In your case you have 3 parallel beam searches going on, with a beam of 5 in each case.
But the current beam search only returns the top beam in each case, we don't have an option to return all beams.<|||||>Thanks Thomas. Does that mean `num_return_sequences` is only useful when `do_sample` is `True`?<|||||>That's a good point.
We could probably take the `num_return_sequences` top beams in the case of beam search + greedy decoding; otherwise this option is not useful in that case.
<|||||>Thanks @thomwolf for the clarification. So, in case of greedy decoding, you would do beam search only once and take the top ```num_return_sequences``` ones.
Any rough ETA you have?
<|||||>No ETA, but if you need it now, feel free to make a PR and I or Lysandre will give it a look<|||||>Hi, I'm also interested in this feature - did anyone attempt to implement greedy beam search that returns multiple sequences? (Alternatively, does anyone have an idea of which part of the function should be fixed, so I can try it myself?) Thanks!<|||||>See PR #3078 for how this feature is implemented.
The following example:
```
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
input_context = 'The dog'
input_ids = torch.tensor(tokenizer.encode(input_context)).unsqueeze(0)
outputs = model.generate(input_ids=input_ids, num_beams=20, num_return_sequences=3, do_sample=False)
for i in range(3):
print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))
```
would produce:
```
Generated 0: The dog was taken to a local hospital, where he was pronounced dead.
The dog was
Generated 1: The dog was taken to a local hospital, where it was treated and released.
The dog
Generated 2: The dog was taken to a local hospital where he was pronounced dead.
The dog's owner
```
<|||||>Thanks @patrickvonplaten for your effort in this. |
transformers | 2,414 | closed | Serializing XLMRobertaTokenizer | I am currently trying to use the XLMRobertaTokenizer in a multiprocessor setting. To do this, the XLMRobertaTokenizer needs to be serializable. Currently XLMRobertaTokenizer is not serializable while other tokenizers such as AlbertTokenizer are.
This PR adds the __getstate__ and __setstate__ methods to XLMRobertaTokenizer so that it can be serialized. | 01-06-2020 11:58:45 | 01-06-2020 11:58:45 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2414?src=pr&el=h1) Report
> Merging [#2414](https://codecov.io/gh/huggingface/transformers/pull/2414?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ffc8eaf53542092271a208a52e881668e753e72?src=pr&el=desc) will **decrease** coverage by `0.04%`.
> The diff coverage is `16.66%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2414?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2414 +/- ##
=========================================
- Coverage 73.24% 73.2% -0.05%
=========================================
Files 87 87
Lines 14989 15000 +11
=========================================
+ Hits 10979 10980 +1
- Misses 4010 4020 +10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2414?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `32.91% <16.66%> (-3.86%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2414?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2414?src=pr&el=footer). Last update [0ffc8ea...1d332a7](https://codecov.io/gh/huggingface/transformers/pull/2414?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Is anything blocking from merging this? :)
Would help us a lot with parallelizing the preprocessing!<|||||>Awesome! Thanks @LysandreJik :) |
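For readers wondering what the `__getstate__`/`__setstate__` pair described in this PR typically looks like for a SentencePiece-backed tokenizer, here is a sketch (attribute names assumed; see the merged diff for the exact implementation):
```python
import sentencepiece as spm

class SentencePieceTokenizerSketch:
    def __init__(self, vocab_file):
        self.vocab_file = vocab_file
        self.sp_model = spm.SentencePieceProcessor()
        self.sp_model.Load(vocab_file)

    def __getstate__(self):
        state = self.__dict__.copy()
        state["sp_model"] = None  # the SentencePieceProcessor itself is not picklable
        return state

    def __setstate__(self, d):
        self.__dict__ = d
        self.sp_model = spm.SentencePieceProcessor()
        self.sp_model.Load(self.vocab_file)
```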
transformers | 2,413 | closed | How to use transformers-cli serve , how to set up on the server side? | ## ❓ Questions & Help
Hi, I would like to run the transformers-based models as a server on a remote machine, the way bert-as-server does.
I suppose I could call the transformers-cli serve command on the server side, but I haven't found much guidance on how to make it work on the client side.
BTW, I am trying to run the serve command with localhost like this:
transformers-cli serve --task feature-extraction --model distilbert --config distilbert-base-uncased --tokenizer distilbert
and it failed with ValueError: Can't find a vocabulary file at path [cached dir file].
I also tried transformers/src/transformers/__main__.py with the same parameters and got the same error.
Would you please give me a snippet showing how to make transformers-cli serve work on both sides?
| 01-06-2020 10:24:52 | 01-06-2020 10:24:52 | Hi @zhoudoufu ! You need to fully specify the model:
```bash
transformers-cli serve --task feature-extraction --model distilbert-base-uncased --config distilbert-base-uncased --tokenizer distilbert-base-uncased
```
Then you should be able to call:
```bash
curl -X POST "http://localhost:8888/forward" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"inputs\":\"My name is Morgan\"}"
```
Let us know :) <|||||>It works, thanks @mfuntowicz |
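For completeness, the same `/forward` call can be made from Python; a minimal sketch assuming the server above is running on localhost:8888:
```python
import requests

response = requests.post(
    "http://localhost:8888/forward",
    json={"inputs": "My name is Morgan"},
    headers={"accept": "application/json"},
)
print(response.json())
```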
transformers | 2,412 | closed | Update Mish activation function to use torchscript JIT | This PR modifies the implementation of Mish to match that of the [fastai library](https://github.com/fastai/fastai_dev/blob/0f613ba3205990c83de9dba0c8798a9eec5452ce/dev/local/layers.py#L441). A discussion of the benefits of JIT for the Mish function can be found on the [fastai forums](https://forums.fast.ai/t/meet-mish-new-activation-function-possible-successor-to-relu/53299/587). | 01-06-2020 10:23:32 | 01-06-2020 10:23:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=h1) Report
> Merging [#2412](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ffc8eaf53542092271a208a52e881668e753e72?src=pr&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `35.71%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2412 +/- ##
==========================================
- Coverage 73.24% 73.21% -0.04%
==========================================
Files 87 87
Lines 14989 15002 +13
==========================================
+ Hits 10979 10984 +5
- Misses 4010 4018 +8
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2412/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.55% <35.71%> (-1.15%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=footer). Last update [0ffc8ea...286b55b](https://codecov.io/gh/huggingface/transformers/pull/2412?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
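For reference, Mish is x * tanh(softplus(x)); a TorchScript-compiled version along the lines of the fastai implementation referenced above looks roughly like this (a sketch, not necessarily the exact code merged in this PR):
```python
import torch
import torch.nn.functional as F

@torch.jit.script
def mish(x):
    # Mish activation: x * tanh(softplus(x))
    return x * torch.tanh(F.softplus(x))
```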
|
transformers | 2,411 | closed | What is the difference between T5Model, T5WithLMHeadModel, T5PreTrainedModel? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I notice that for the T5 model, there are more choices (T5Model, T5WithLMHeadModel, T5PreTrainedModel) than for BERT or GPT. What is the difference between these three? I think all three are pre-trained models. We do not use T5PreTrainedModel in our downstream task code. Besides, the difference between T5Model and T5WithLMHeadModel is that the latter contains one more linear layer at the end. Am I right about these? | 01-06-2020 07:01:32 | 01-06-2020 07:01:32 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,410 | closed | Typo in XLM moses pipeline. | The replacement for the unicode punct replacement has a mistake at https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L477 | 01-06-2020 01:58:20 | 01-06-2020 01:58:20 | Indeed, thanks @alvations !<|||||>BTW, https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L621 could also be simplified to the normalizer object from https://github.com/alvations/sacremoses/blob/master/sacremoses/normalize.py#L129
```python
def moses_punct_norm(self, text, lang):
if lang not in self.cache_moses_punct_normalizer:
punct_normalizer = sm.MosesPunctNormalizer(lang=lang,
pre_replace_unicode_punct=True, post_remove_control_chars=True)
self.cache_moses_punct_normalizer[lang] = punct_normalizer
else:
punct_normalizer = self.cache_moses_punct_normalizer[lang]
return punct_normalizer.normalize(text)
```
Then the pipeline at https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L635 would just be
```python
def moses_pipeline(self, text, lang):
text = self.moses_punct_norm(text, lang)
return text
``` |
transformers | 2,409 | closed | Error in pipeline() when model left as None | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Default models as per `SUPPORTED_TASKS` config in [pipeline.py](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py)
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [x] the official example scripts: (`pipeline.py`)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (`question-answering`, `ner`, `feature-extraction`, `sentiment-analysis`)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Install `transformers` 2.3.0
2. Run [example](https://github.com/huggingface/transformers#quick-tour-of-pipelines)
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```py
from transformers import pipeline
>>> nlp = pipeline('question-answering')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Continuum\anaconda3\envs\transformers\lib\site-packages\transformers\pipelines.py", line 860, in pipeline
model = models[framework]
TypeError: string indices must be integers
>>> nlp = pipeline('question-answering', model='distilbert-base-uncased-distilled-squad', tokenizer='distilbert-base-uncased')
```
## Expected behavior
Leaving `model`/`tokenizer` args as `None` should not yield an error.
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: system='Windows', release='10', version='10.0.17134', machine='AMD64'
* Python version: 3.5.5
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? No
* Distributed or parallel setup ? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| 01-05-2020 22:06:03 | 01-05-2020 22:06:03 | Upgraded to Python 3.6.7, and two of the _tasks_ (sentiment-analysis and question-answering) work as expected (i.e., no error without specifying `model` args).
The remaining two _tasks_ (ner and feature-extraction) fail on a new (similar) error:
#### feature-extraction
```py
>>> from transformers import pipeline
To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html
>>> nlp = pipeline('feature-extraction')
Traceback (most recent call last):
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\transformers\modeling_utils.py", line 415, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location='cpu')
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\torch\serialization.py", line 426, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\torch\serialization.py", line 620, in _load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 9211648 more bytes. The file might be corrupted.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\transformers\pipelines.py", line 905, in pipeline
model = model_class.from_pretrained(model, config=config, **model_kwargs)
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\transformers\modeling_auto.py", line 238, in from_pretrained
return DistilBertModel.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\transformers\modeling_utils.py", line 417, in from_pretrained
raise OSError("Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
#### ner
```py
>>> from transformers import pipeline
To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html
>>> nlp = pipeline('ner')
Traceback (most recent call last):
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\transformers\modeling_utils.py", line 415, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location='cpu')
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\torch\serialization.py", line 426, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\torch\serialization.py", line 620, in _load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 3733591 more bytes. The file might be corrupted.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\transformers\pipelines.py", line 905, in pipeline
model = model_class.from_pretrained(model, config=config, **model_kwargs)
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\transformers\modeling_auto.py", line 882, in from_pretrained
return BertForTokenClassification.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
File "D:\Continuum\anaconda3\envs\transformers-py36\lib\site-packages\transformers\modeling_utils.py", line 417, in from_pretrained
raise OSError("Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
#### Troubleshooting attempts
1. Tried specifying `model` args, but Python crashes every time.
2. Tried adding `force_download=True`, but same error as above<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>To close the loop.
Using `transformers-2.5.1` solves the issue.
Thanks! |
transformers | 2,408 | closed | Can't download models or model config | ## ❓ Questions & Help
When I run fine-tuning examples such as run_squad.py, I get an error like this:
E:\tensorflow_natural_question\transformers\examples>python run_squad.py --model_type bert --model_name_or_path bert-base-cased --do_train --do_eval --do_lower_case --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --per_gpu_train_batch_size 2 --learning_rate 3e-5 --num_train_epochs 1.0 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/
2020-01-05 23:58:07.219057: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
01/05/2020 23:58:08 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
01/05/2020 23:58:13 - INFO - filelock - Lock 2170508660632 acquired on C:\Users\Administrator\.cache\torch\transformers\b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.d7a3af18ce3a2ab7c0f48f04dc8daff45ed9a3ed333b9e9a79d012a0dedf87a6.lock
01/05/2020 23:58:13 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json not found in cache or force_download set to True, downloading to C:\Users\Administrator\.cache\torch\transformers\tmptcqtrh98
win10+pytorch-gpu1.2.0+python3.7.3
| 01-05-2020 16:31:42 | 01-05-2020 16:31:42 | Hi, I'm not sure I see what exactly is your problem ? Was there something following this message, like an error or a warning ?<|||||>OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin' to download pretrained weights.
it seems the connection to the service failed<|||||>OK, thanks. |
transformers | 2,407 | closed | [cli] Add support for T5 model conversion | I have added support for converting a T5 model from the CLI.
| 01-05-2020 14:31:40 | 01-05-2020 14:31:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=h1) Report
> Merging [#2407](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/80faf22b4ac194061a08fde09ad8b202118c151e?src=pr&el=desc) will **decrease** coverage by `1.16%`.
> The diff coverage is `0%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2407 +/- ##
==========================================
- Coverage 73.24% 72.08% -1.17%
==========================================
Files 87 87
Lines 14989 14993 +4
==========================================
- Hits 10979 10808 -171
- Misses 4010 4185 +175
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `0% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `54.1% <0%> (-10.15%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.6% <0%> (-2.3%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92% <0%> (-2.23%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.86% <0%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `60.65% <0%> (-0.69%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `67.73% <0%> (-0.59%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=footer). Last update [80faf22...064bddf](https://codecov.io/gh/huggingface/transformers/pull/2407?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,406 | closed | BERT's Embedding/Vocab Size in Code is Different from Provided Pretrained Config | ## 🐛 A Subtle Bug
Hi, I really appreciate your work, but I found a subtle problem here. Could you take a look at it?
- The model I am using is **BERT**.
- The language I am using the model on is **English**.
- The problem arises when using:
- The task I am working on is to simply initialize a BERT object with my own modifications to config, i.e., `BertConfig` class.
## To Reproduce
Steps to reproduce the behavior:
Simply run this line:
```
BertModel.from_pretrained("bert-base-cased",config=BertConfig(output_hidden_states=True))
```
Then we get the following error message:
`
File "D:\Anaconda3\envs\gnner\lib\site-packages\transformers\modeling_utils.py", line 486, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertModel:
size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([28996, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
`
## Expected behavior
It should run without errors instead of reporting issues like that.
## Possible reason
The issue is because of `line 86` in `configuration_bert.py`, where the vocabulary size is **`30522`**. The default vocabulary size I believe should be consistent with that in the config file, i.e., `https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json`, where it's **`28996`**.
## Environment
* OS: Windows 10
* Python version: 3.7.3
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): latest pip package
* Using GPU ? It's independent of environments like this I believe.
<!-- Add any other context about the problem here. --> | 01-05-2020 13:57:49 | 01-05-2020 13:57:49 | The default configuration in `configuration_bert.py` is for the `bert-base-uncased` model. I am not sure whether what you are trying to do here will work, but here is what I would suggest trying:
First, load the configuration manually from the `bert-base-cased` json. Then change the parameters you want to change and pass the config to the `from_pretrained` function.<|||||>> The default configuration in `configuration_bert.py` is for the `bert-base-uncased` model. I am not sure whether what you are trying to do here will work, but here is what I would suggest trying:
>
> First, load the configuration manually from the `bert-base-cased` json. Then change the parameters you want to change and pass the config to the `from_pretrained` function.
Thank you NaxAlpha for your immediate reply!
My intention is simply to get the outputs of all hidden layers from a pre-trained BERT, and I ran into this 'issue'. Your solution sounds good!
In the future, might it be better to load the corresponding config according to the input parameter, i.e., the string like `bert-base-uncased` or `bert-base-cased`, since the weights are also loaded according to this string?<|||||>Great. I have verified that it is working:
https://colab.research.google.com/drive/1IPgcACm38dIUaj9RqTWw9xbwwywIOpXf<|||||>As @NaxAlpha says, the default parameters are that of the `bert-base-uncased` model. If you wish to instantiate a `BertConfig` from the `bert-base-cased` model with the `output_hidden_states` flag set to `True`, you would do it as follows:
```py
config = BertConfig.from_pretrained("bert-base-cased", output_hidden_states=True)
model = BertModel.from_pretrained("bert-base-cased", config=config)
```<|||||>Thanks, guys.
Your replies solve my question well. <|||||>I am on Ubuntu where also reports
```
Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 768]) from checkpoint, the shape in current model is torch.Size([28996, 768]).
```
I am currently checking solutions above.<|||||>> I am on Ubuntu where also reports
>
> ```
> Error(s) in loading state_dict for BertForSequenceClassification:
> size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 768]) from checkpoint, the shape in current model is torch.Size([28996, 768]).
> ```
>
> I am currently checking solutions above.
Hi, I think it may be better to also post the minimal code to reproduce the issue here. <|||||>I was facing a similar issue previously as I tried to adapt the allenai/scibert_scivocab_cased model: I was still using the BERT config.json. By making sure the config.json matches the model I am using (in my case the SciBERT config), I was able to bypass this issue. |
transformers | 2,405 | closed | weird resize during the initialization in the PreTrainedModel | Hi
I am using BertForMaskedLM in the run_lm_finetuning.py code. This module calls BertLMPredictionHead, which contains a decoder layer of size hidden_size * vocab_size. I would like to change the dimension of this layer, but when I do, I find that during the call to self.init_weights() inside BertForMaskedLM the weights of the decoder layer get resized. I cannot track down where exactly this happens; thanks for your advice on how I can resolve this issue. | 01-05-2020 12:35:10 | 01-05-2020 12:35:10 | This layer is resized in `self.init_weights` because it shares weights with the embedding layer. They need to be the same size.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
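An editor's illustration of the answer above (not code from the thread): `init_weights` ties the MLM decoder to the input embeddings, so both must keep the same (vocab_size, hidden_size) shape.
```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")
decoder_w = model.cls.predictions.decoder.weight
embedding_w = model.bert.embeddings.word_embeddings.weight
print(decoder_w.data_ptr() == embedding_w.data_ptr())  # expected True: shared storage
```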
|
transformers | 2,404 | closed | Pretrained Model not available | ## ❓ Questions & Help
01/05/2020 12:18:00 - INFO - root - finetuned model not available - loading standard pretrained model
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - Model name '/opt/ml/code/pretrained_models/bert-base-uncased' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1). Assuming '/opt/ml/code/pretrained_models/bert-base-uncased' is a path or url to a directory containing tokenizer files.
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - Didn't find file /opt/ml/code/pretrained_models/bert-base-uncased/added_tokens.json. We won't load it.
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - Didn't find file /opt/ml/code/pretrained_models/bert-base-uncased/special_tokens_map.json. We won't load it.
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - Didn't find file /opt/ml/code/pretrained_models/bert-base-uncased/tokenizer_config.json. We won't load it.
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - loading file /opt/ml/code/pretrained_models/bert-base-uncased/vocab.txt
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - loading file None
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - loading file None
01/05/2020 12:18:00 - INFO - transformers.tokenization_utils - loading file None | 01-05-2020 12:23:29 | 01-05-2020 12:23:29 | Could you describe your issue in more details? e.g. share some code on what you are trying to do and what is not working?<|||||>I have the same issue.

<|||||>I downloaded a pretrained model and unzipped it to my path. When I load it through BertTokenizer, it cannot be found. Could you please tell me what/how to check? @NaxAlpha <|||||>The error is
ValueError: "Can't find a vocabulary file at path **\.cache\**".
It seems the program tries to load the file from the cache instead of the assigned path. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I got the same error from AWS container training, as @punit121 did. Does it mean it didn't successfully download the pretrained model?<|||||>@tbs17 Nope, this is probably a path error.
Can you share a screenshot of your code?
<|||||>Got the same error running in a Docker container, while the same code works completely fine locally.
```
server_1 | INFO:pytorch_transformers.tokenization_utils:Didn't find file model/added_tokens.json. We won't load it.
server_1 | INFO:pytorch_transformers.tokenization_utils:Didn't find file model/special_tokens_map.json. We won't load it.
```
The pretrained model is saved in the path `/model`. This path setup is the same as how I do it locally/separately. But I can't seem to figure out the issue as to why when I integrate this code into a Docker container, it hits these errors. Furthermore, I have confirmed that I am in the right path and am able to access the `model` subdirectory and that the file `model/bert_config.json` is able to be accessed.
Any ideas for how to resolve this issue? <|||||>@catyeo18, these are not errors, it just indicates that your tokenizer has no additional added tokens or special tokens.
If you're using `model.from_pretrained`, please note that the configuration must absolutely be named `config.json`, and not `bert_config.json`.<|||||>@LysandreJik thank you for your response -- I don't understand why that is the case when my tokenizer works fine when I run my scripts locally, but yields those messages when I run my scripts in a Docker container. There is no difference in my code or file setup.<|||||>@catyeo18 I spent some time digging into the code and I think the reason is that in `transformers/file_utils.py`, if the file is not there but you have an internet connection to check, the code just lets that fail silently: the script checks in `.cache`, tries to download, doesn't find the file, and simply ignores it. However, when set up in an environment without an internet connection (Docker, for example), the script cannot find the file in the `cache` and also cannot check whether the file is available online, so it throws the error. |
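To illustrate the point about file names, here is a minimal sketch of loading from a local directory; `from_pretrained` expects `config.json`, `pytorch_model.bin` and `vocab.txt` inside the folder (the `./model` path is just an example):
```python
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("./model")
model = BertForSequenceClassification.from_pretrained("./model")
```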
transformers | 2,403 | closed | Add support for Albert and XLMRoberta for the Glue example | 01-05-2020 10:14:16 | 01-05-2020 10:14:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=h1) Report
> Merging [#2403](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/80faf22b4ac194061a08fde09ad8b202118c151e?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2403 +/- ##
=======================================
Coverage 73.24% 73.24%
=======================================
Files 87 87
Lines 14989 14989
=======================================
Hits 10979 10979
Misses 4010 4010
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=footer). Last update [80faf22...ff6dacf](https://codecov.io/gh/huggingface/transformers/pull/2403?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,402 | closed | BertForTokenClassification can not from_pretrained the fine-tuned model? | ## ❓ Questions & Help
Thanks for the great work!
However, when I wrap the `run_ner.py` script in sklearn API style for non-specialists, I run into a problem.
Training and evaluating work fine, but when predicting, the F1-score is much lower than during evaluation, as shown below:
Evaluation result: F1-score of 0.8242
```
***** Eval results 500 *****
f1 = 0.8101377518505809
loss = 0.10396960769538525
precision = 0.8009887553315238
recall = 0.8194981652286026
***** Eval results 1000 *****
f1 = 0.8242496050552922
loss = 0.09259376796307388
precision = 0.8206035584390052
recall = 0.8279281959734206
```
Prediction result: F1-score of 0.0934
```
precision recall f1-score support
tim 0.0954 0.0943 0.0949 2014
org 0.0743 0.0688 0.0714 2021
geo 0.1004 0.1087 0.1044 3771
per 0.0843 0.0864 0.0853 1644
gpe 0.1022 0.1010 0.1016 1623
nat 0.3333 0.0769 0.1250 13
art 0.0000 0.0000 0.0000 51
eve 0.0400 0.0476 0.0435 21
micro avg 0.0930 0.0938 0.0934 11158
macro avg 0.0924 0.0938 0.0929 11158
```
**Why? I have checked the code many times. Is there a bug in saving or loading the fine-tuned model?**
The fine-tuning and prediction script is based on `transformers-sklearn`:
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from transformers_sklearn import BERTologyNERClassifer
if __name__ == '__main__':
data_df = pd.read_csv('datasets/gmbner/ner_dataset.csv',encoding="utf8")
data_df.fillna(method="ffill",inplace=True)
value_counts = data_df['Tag'].value_counts()
label_list = list(value_counts.to_dict().keys())
# ## 1. preparing data
X = []
y = []
for label, batch_df in data_df.groupby(by='Sentence #',sort=False):
words = batch_df['Word'].tolist()
labels = batch_df['Tag'].tolist()
assert len(words) == len(labels)
X.append(words)
y.append(labels)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.1,random_state=520)
## 2. customize model
ner = BERTologyNERClassifer(
labels=label_list,
model_type='bert',
model_name_or_path='bert-base-cased',
data_dir='ts_data/gmbner',
output_dir='results/gmbner',
num_train_epochs=3,
learning_rate=5e-5,
logging_steps=500,
save_steps=500,
overwrite_output_dir=True
)
#
## 3. fit
ner.fit(X_train, y_train)
# # # #
## 4. score
report = ner.score(X_test, y_test)
with open('gmbner.txt', 'w', encoding='utf8') as f:
f.write(report)
```
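Before digging into the library-side code below, one quick way to rule out the `save_pretrained`/`from_pretrained` round trip itself is an editor's sketch like the following (`num_labels` and the temporary path are placeholders, not taken from the thread):
```python
import torch
from transformers import BertForTokenClassification

model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=17)
model.save_pretrained("tmp_check")
reloaded = BertForTokenClassification.from_pretrained("tmp_check")
for (n1, p1), (n2, p2) in zip(model.named_parameters(), reloaded.named_parameters()):
    assert n1 == n2 and torch.equal(p1, p2), n1
print("save_pretrained/from_pretrained round trip preserves all weights")
```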
These are the two scripts in [transformers-sklearn](https://github.com/trueto/transformers_sklearn) used for the NER task.
`token_classification.py`
```python
import os
import torch
import random
import logging
import numpy as np
from tqdm import tqdm, trange
from torch.nn import CrossEntropyLoss
from torch.utils.data import random_split,TensorDataset,\
DistributedSampler,RandomSampler,SequentialSampler,DataLoader
from tensorboardX import SummaryWriter
from transformers_sklearn.utils.token_classification_utils import get_labels,\
read_examples_from_X_y,convert_examples_to_features
from transformers_sklearn.utils.data_utils import to_numpy
from sklearn.base import BaseEstimator,ClassifierMixin
from transformers_sklearn.utils.metrics_utils import f1_score,recall_score,precision_score,classification_report
from transformers import AdamW, get_linear_schedule_with_warmup
from transformers import BertConfig, BertForTokenClassification, BertTokenizer
from transformers import RobertaConfig, RobertaForTokenClassification, RobertaTokenizer
from transformers import DistilBertConfig, DistilBertForTokenClassification, DistilBertTokenizer
from transformers import CamembertConfig, CamembertForTokenClassification, CamembertTokenizer
# from transformers import AlbertConfig,AlbertTokenizer
from transformers_sklearn.model_albert import AlbertForTokenClassification,AlbertTokenizer,AlbertConfig
ALL_MODELS = sum(
(tuple(conf.pretrained_config_archive_map.keys()) for conf in (BertConfig, RobertaConfig, DistilBertConfig)),
())
MODEL_CLASSES = {
"bert": (BertConfig, BertForTokenClassification, BertTokenizer),
"roberta": (RobertaConfig, RobertaForTokenClassification, RobertaTokenizer),
"distilbert": (DistilBertConfig, DistilBertForTokenClassification, DistilBertTokenizer),
"camembert": (CamembertConfig, CamembertForTokenClassification, CamembertTokenizer),
"albert":(AlbertConfig,AlbertForTokenClassification,AlbertTokenizer)
}
logger = logging.getLogger(__name__)
def set_seed(seed=520,n_gpu=1):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(seed)
class BERTologyNERClassifer(BaseEstimator,ClassifierMixin):
def __init__(self,labels,data_dir='ts_data',model_type='bert',
model_name_or_path='bert-base-chinese',
output_dir='ts_results',config_name='',
tokenizer_name='',cache_dir='model_cache',
max_seq_length=512,do_lower_case=False,
per_gpu_train_batch_size=8,per_gpu_eval_batch_size=8,
gradient_accumulation_steps=1,
learning_rate=5e-5,weight_decay=0.0,
adam_epsilon=1e-8,max_grad_norm=1.0,
num_train_epochs=3.0,max_steps=-1,
warmup_steps=0,logging_steps=50,
save_steps=50,evaluate_during_training=True,
no_cuda=False,overwrite_output_dir=False,
overwrite_cache=False,seed=520,
fp16=False,fp16_opt_level='01',
local_rank=-1,val_fraction=0.1):
self.labels = labels
self.data_dir = data_dir
self.model_type = model_type
self.model_name_or_path = model_name_or_path
self.output_dir = output_dir
self.config_name = config_name
self.tokenizer_name = tokenizer_name
self.max_seq_length = max_seq_length
self.do_lower_case = do_lower_case
self.cache_dir = cache_dir
self.per_gpu_train_batch_size = per_gpu_train_batch_size
self.per_gpu_eval_batch_size = per_gpu_eval_batch_size
self.gradient_accumulation_steps = gradient_accumulation_steps
self.learning_rate = learning_rate
self.weight_decay = weight_decay
self.adam_epsilon = adam_epsilon
self.max_grad_norm = max_grad_norm
self.num_train_epochs = num_train_epochs
self.max_steps = max_steps
self.warmup_steps = warmup_steps
self.logging_steps = logging_steps
self.save_steps = save_steps
self.evaluate_during_training = evaluate_during_training
self.no_cuda = no_cuda
self.overwrite_output_dir = overwrite_output_dir
self.overwrite_cache = overwrite_cache
self.seed = seed
self.fp16 = fp16
self.fp16_opt_level = fp16_opt_level
self.local_rank = local_rank
self.val_fraction = val_fraction
self.id2label = {i: label for i, label in enumerate(self.labels)}
self.label_map = {label: i for i, label in enumerate(self.labels)}
# Setup CUDA, GPU & distributed training
if self.local_rank == -1 or self.no_cuda:
device = torch.device("cuda" if torch.cuda.is_available() and not self.no_cuda else "cpu")
self.n_gpu = torch.cuda.device_count() if not self.no_cuda else 1
else: # Initializes the distributed backend which will take care of sychronizing nodes/GPUs
torch.cuda.set_device(self.local_rank)
device = torch.device("cuda", self.local_rank)
torch.distributed.init_process_group(backend="nccl")
self.n_gpu = 1
self.device = device
# Setup logging
logging.basicConfig(format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if self.local_rank in [-1, 0] else logging.WARN)
logger.warning("Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
self.local_rank, device, self.n_gpu, bool(self.local_rank != -1), self.fp16)
# Set seed
set_seed(seed=self.seed,n_gpu=self.n_gpu)
def fit(self,X,y):
if not os.path.exists(self.data_dir):
os.mkdir(self.data_dir)
if not os.path.exists(self.output_dir):
os.mkdir(self.output_dir)
if os.path.exists(self.output_dir) and os.listdir(
self.output_dir) and not self.overwrite_output_dir:
raise ValueError(
"Output directory ({}) already exists and is not empty. Use --overwrite_output_dir to overcome.".format(
self.output_dir))
num_labels = len(self.labels)
# self.labels = labels
# Use cross entropy ignore index as padding label id so that only real label ids contribute to the loss later
pad_token_label_id = CrossEntropyLoss().ignore_index
self.pad_token_label_id = pad_token_label_id
# Load pretrained model and tokenizer
if self.local_rank not in [-1, 0]:
torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
self.model_type = self.model_type.lower()
config_class, model_class, tokenizer_class = MODEL_CLASSES[self.model_type]
config = config_class.from_pretrained(self.config_name if self.config_name else self.model_name_or_path,
num_labels=num_labels,
cache_dir=self.cache_dir if self.cache_dir else None,
share_type='all' if self.model_type=='albert' else None)
tokenizer = tokenizer_class.from_pretrained(
self.tokenizer_name if self.tokenizer_name else self.model_name_or_path,
do_lower_case=self.do_lower_case,
cache_dir=self.cache_dir if self.cache_dir else None)
model = model_class.from_pretrained(self.model_name_or_path,
from_tf=bool(".ckpt" in self.model_name_or_path),
config=config,
cache_dir=self.cache_dir if self.cache_dir else None)
if self.local_rank == 0:
torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
model.to(self.device)
logger.info("Training/evaluation parameters %s", self)
train_dataset = load_and_cache_examples(self, tokenizer, pad_token_label_id, X,y, mode="train")
global_step, tr_loss = train(self, train_dataset,model,tokenizer,pad_token_label_id)
logger.info(" global_step = %s, average loss = %s", global_step, tr_loss)
# Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained()
if self.local_rank == -1 or torch.distributed.get_rank() == 0:
# Create output directory if needed
if not os.path.exists(self.output_dir) and self.local_rank in [-1, 0]:
os.makedirs(self.output_dir)
logger.info("Saving model checkpoint to %s", self.output_dir)
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = model.module if hasattr(model,"module") else model # Take care of distributed/parallel training
model_to_save.save_pretrained(self.output_dir)
tokenizer.save_pretrained(self.output_dir)
# Good practice: save your training arguments together with the trained model
torch.save(self, os.path.join(self.output_dir, "training_args.bin"))
return self
def predict(self,X):
# args = torch.load(os.path.join(self.output_dir, "training_args.bin"))
# Load a trained model and vocabulary that you have fine-tuned
_, model_class, tokenizer_class = MODEL_CLASSES[self.model_type]
model = model_class.from_pretrained(self.output_dir)
tokenizer = tokenizer_class.from_pretrained(self.output_dir)
model.to(self.device)
pad_token_label_id = CrossEntropyLoss().ignore_index
# get dataset
test_dataset = load_and_cache_examples(self,tokenizer,pad_token_label_id,X,y=None,mode='test')
_, preds_list = evaluate(self,test_dataset,model,pad_token_label_id,mode='test')
return preds_list
def score(self, X, y, sample_weight=None):
y_pred = self.predict(X)
return classification_report(y,y_pred,digits=4)
def load_and_cache_examples(args, tokenizer,pad_token_label_id, X,y,mode):
if args.local_rank not in [-1, 0] and mode=='train':
torch.distributed.barrier() # Make sure only the first process in distributed training process the dataset, and the others will use the cache
# Load data features from cache or dataset file
cached_features_file = os.path.join(args.data_dir, "cached_{}_{}_{}".format(mode,
args.model_type,
str(args.max_seq_length)))
if os.path.exists(cached_features_file) and not args.overwrite_cache and mode=='train':
logger.info("Loading features from cached file %s", cached_features_file)
features = torch.load(cached_features_file)
else:
logger.info("Creating features from dataset file at %s", args.data_dir)
examples = read_examples_from_X_y(X,y, mode)
features = convert_examples_to_features(examples, args.label_map, args.max_seq_length, tokenizer,
cls_token_at_end=bool(args.model_type in ["xlnet"]),
# xlnet has a cls token at the end
cls_token=tokenizer.cls_token,
cls_token_segment_id=2 if args.model_type in ["xlnet"] else 0,
sep_token=tokenizer.sep_token,
sep_token_extra=bool(args.model_type in ["roberta"]),
# roberta uses an extra separator b/w pairs of sentences, cf. github.com/pytorch/fairseq/commit/1684e166e3da03f5b600dbb7855cb98ddfcd0805
pad_on_left=bool(args.model_type in ["xlnet"]),
# pad on the left for xlnet
pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0],
pad_token_segment_id=4 if args.model_type in ["xlnet"] else 0,
pad_token_label_id=pad_token_label_id
)
if args.local_rank in [-1, 0] and mode == 'train':
logger.info("Saving features into cached file %s", cached_features_file)
torch.save(features, cached_features_file)
if args.local_rank == 0 and mode =='train':
torch.distributed.barrier() # Make sure only the first process in distributed training process the dataset, and the others will use the cache
# Convert to Tensors and build dataset
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
all_label_ids = torch.tensor([f.label_ids for f in features], dtype=torch.long)
dataset = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
return dataset
def train(args, train_dataset, model, tokenizer, pad_token_label_id):
""" Train the model """
if args.local_rank in [-1, 0]:
tb_writer = SummaryWriter()
args.train_batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu)
val_len = int(len(train_dataset)*args.val_fraction)
train_len = len(train_dataset) - val_len
train_ds, val_ds = random_split(train_dataset,[train_len,val_len])
train_sampler = RandomSampler(train_ds) if args.local_rank == -1 else DistributedSampler(train_ds)
train_dataloader = DataLoader(train_ds, sampler=train_sampler, batch_size=args.train_batch_size)
if args.max_steps > 0:
t_total = args.max_steps
args.num_train_epochs = args.max_steps // (len(train_dataloader) // args.gradient_accumulation_steps) + 1
else:
t_total = len(train_dataloader) // args.gradient_accumulation_steps * args.num_train_epochs
# Prepare optimizer and schedule (linear warmup and decay)
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": args.weight_decay},
{"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total)
# Check if saved optimizer or scheduler states exist
if os.path.isfile(os.path.join(args.model_name_or_path, "optimizer.pt")) and os.path.isfile(
os.path.join(args.model_name_or_path, "scheduler.pt")
):
# Load in optimizer and scheduler states
optimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "optimizer.pt")))
scheduler.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "scheduler.pt")))
if args.fp16:
try:
from apex import amp
except ImportError:
raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level)
# multi-gpu training (should be after apex fp16 initialization)
if args.n_gpu > 1:
model = torch.nn.DataParallel(model)
# Distributed training (should be after apex fp16 initialization)
if args.local_rank != -1:
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank],
output_device=args.local_rank,
find_unused_parameters=True)
# Train!
logger.info("***** Running training *****")
logger.info(" Num examples = %d", len(train_ds))
logger.info(" Num Epochs = %d", args.num_train_epochs)
logger.info(" Instantaneous batch size per GPU = %d", args.per_gpu_train_batch_size)
logger.info(" Total train batch size (w. parallel, distributed & accumulation) = %d",
args.train_batch_size * args.gradient_accumulation_steps * (
torch.distributed.get_world_size() if args.local_rank != -1 else 1))
logger.info(" Gradient Accumulation steps = %d", args.gradient_accumulation_steps)
logger.info(" Total optimization steps = %d", t_total)
global_step = 0
epochs_trained = 0
steps_trained_in_current_epoch = 0
# Check if continuing training from a checkpoint
if os.path.exists(args.model_name_or_path):
# set global_step to gobal_step of last saved checkpoint from model path
global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0])
epochs_trained = global_step // (len(train_dataloader) // args.gradient_accumulation_steps)
steps_trained_in_current_epoch = global_step % (len(train_dataloader) // args.gradient_accumulation_steps)
logger.info(" Continuing training from checkpoint, will skip to saved global_step")
logger.info(" Continuing training from epoch %d", epochs_trained)
logger.info(" Continuing training from global step %d", global_step)
logger.info(" Will skip the first %d steps in the first epoch", steps_trained_in_current_epoch)
tr_loss, logging_loss = 0.0, 0.0
model.zero_grad()
train_iterator = trange(int(args.num_train_epochs), desc="Epoch", disable=args.local_rank not in [-1, 0])
set_seed(seed=args.seed,n_gpu=args.n_gpu) # Added here for reproductibility (even between python 2 and 3)
for _ in train_iterator:
epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0])
for step, batch in enumerate(epoch_iterator):
# Skip past any already trained steps if resuming training
if steps_trained_in_current_epoch > 0:
steps_trained_in_current_epoch -= 1
continue
model.train()
batch = tuple(t.to(args.device) for t in batch)
inputs = {"input_ids": batch[0],
"attention_mask": batch[1],
"labels": batch[3]}
if args.model_type != "distilbert":
inputs["token_type_ids"] = batch[2] if args.model_type in ["bert", "xlnet"] else None # XLM and RoBERTa don"t use segment_ids
outputs = model(**inputs)
loss = outputs[0] # model outputs are always tuple in pytorch-transformers (see doc)
if args.n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu parallel training
if args.gradient_accumulation_steps > 1:
loss = loss / args.gradient_accumulation_steps
if args.fp16:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
else:
loss.backward()
tr_loss += loss.item()
if (step + 1) % args.gradient_accumulation_steps == 0:
if args.fp16:
torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm)
else:
torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
optimizer.step()
scheduler.step() # Update learning rate schedule
model.zero_grad()
global_step += 1
if args.local_rank in [-1, 0] and args.logging_steps > 0 and global_step % args.logging_steps == 0:
# Log metrics
if args.local_rank == -1 and args.evaluate_during_training: # Only evaluate when single GPU otherwise metrics may not average well
results, _ = evaluate(args, val_ds, model,pad_token_label_id,prefix=global_step)
for key, value in results.items():
tb_writer.add_scalar("eval_{}".format(key), value, global_step)
tb_writer.add_scalar("lr", scheduler.get_lr()[0], global_step)
tb_writer.add_scalar("loss", (tr_loss - logging_loss) / args.logging_steps, global_step)
logging_loss = tr_loss
if args.local_rank in [-1, 0] and args.save_steps > 0 and global_step % args.save_steps == 0:
# Save model checkpoint
output_dir = os.path.join(args.output_dir, "checkpoint-{}".format(global_step))
if not os.path.exists(output_dir):
os.makedirs(output_dir)
model_to_save = model.module if hasattr(model, "module") else model # Take care of distributed/parallel training
model_to_save.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
torch.save(args, os.path.join(output_dir, "training_args.bin"))
logger.info("Saving model checkpoint to %s", output_dir)
torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
logger.info("Saving optimizer and scheduler states to %s", output_dir)
if args.max_steps > 0 and global_step > args.max_steps:
epoch_iterator.close()
break
if args.max_steps > 0 and global_step > args.max_steps:
train_iterator.close()
break
if args.local_rank in [-1, 0]:
tb_writer.close()
return global_step, tr_loss / global_step
def evaluate(args, eval_dataset, model, pad_token_label_id, mode='dev',prefix=0):
# eval_dataset = load_and_cache_examples(args, tokenizer, labels, pad_token_label_id, mode=mode)
args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu)
# Note that DistributedSampler samples randomly
eval_sampler = SequentialSampler(eval_dataset) if args.local_rank == -1 else DistributedSampler(eval_dataset)
eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)
# multi-gpu evaluate
if args.n_gpu > 1 and mode == 'test':
model = torch.nn.DataParallel(model)
# Eval!
if mode == 'dev':
logger.info("***** Running evaluation %s *****", prefix)
else:
logger.info("***** Running predict *****")
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", args.eval_batch_size)
eval_loss = 0.0
nb_eval_steps = 0
preds = None
out_label_ids = None
model.eval()
for batch in tqdm(eval_dataloader, desc="Evaluating" if mode=='dev' else "Predicting"):
batch = tuple(t.to(args.device) for t in batch)
with torch.no_grad():
inputs = {"input_ids": batch[0],"attention_mask": batch[1],"labels": batch[3]}
if args.model_type != "distilbert":
inputs["token_type_ids"] = batch[2] if args.model_type in ["bert", "xlnet"] else None # XLM and RoBERTa don"t use segment_ids
outputs = model(**inputs)
tmp_eval_loss, logits = outputs[:2]
if args.n_gpu > 1:
tmp_eval_loss = tmp_eval_loss.mean() # mean() to average on multi-gpu parallel evaluating
eval_loss += tmp_eval_loss.item()
nb_eval_steps += 1
if preds is None:
preds = logits.detach().cpu().numpy()
out_label_ids = inputs["labels"].detach().cpu().numpy()
else:
preds = np.append(preds, logits.detach().cpu().numpy(), axis=0)
out_label_ids = np.append(out_label_ids, inputs["labels"].detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
preds = np.argmax(preds, axis=2)
# label_map = {i: label for i, label in enumerate(labels)}
out_label_list = [[] for _ in range(out_label_ids.shape[0])]
preds_list = [[] for _ in range(out_label_ids.shape[0])]
for i in range(out_label_ids.shape[0]):
for j in range(out_label_ids.shape[1]):
if out_label_ids[i, j] != pad_token_label_id:
out_label_list[i].append(args.id2label[out_label_ids[i][j]])
preds_list[i].append(args.id2label[preds[i][j]])
results = {
"loss": eval_loss,
"precision": precision_score(out_label_list, preds_list),
"recall": recall_score(out_label_list, preds_list),
"f1": f1_score(out_label_list, preds_list)
}
if mode == 'dev':
output_eval_file = os.path.join(args.output_dir, "eval_results.txt")
with open(output_eval_file, "a") as writer:
logger.info("***** Eval results %d *****",prefix)
writer.write("***** Eval results {} *****".format(prefix))
for key in sorted(results.keys()):
msg = "{} = {}".format(key, str(results[key]))
logger.info(msg)
writer.write(msg)
writer.write('\n')
writer.write('\n')
return results, preds_list
```
<!-- A clear and concise description of the question. -->
| 01-05-2020 03:08:27 | 01-05-2020 03:08:27 | Hi there, I had this same issue.
In my case, it was a tokenizer issue. Using "bert-base-multilingual-cased" or "bert-base-multilingual-uncased" for
`--tokenizer_name` solved the problem.<|||||>> Hi there, I had this same issue.
> In my case, it was a tokenizer issue. Using "bert-base-multilingual-cased" or "bert-base-multilingual-uncased" for
> `--tokenizer_name` solved the problem.
Thanks! I tried it, but it didn't work in my case.<|||||>Sorry, it's my fault. I read the data with 'latin1' encoding and skipped the whole line when the length of the tokens did not equal that of the label_ids. After changing the csv file to 'utf8' encoding, everything works as expected! |
transformers | 2,401 | closed | Batch size affecting output. | ## ❓ Questions & Help
When running evaluation, why am I getting slightly different outputs with a batch size of 1 compared to a batch size greater than 1?
| 01-04-2020 13:44:46 | 01-04-2020 13:44:46 | It is possible to get slightly different results. Could you share more details on which evaluation script you are running and for which model/configuration, etc.?<|||||>I'm having the same issue, but with XLM-R:
I decided to write a simple script to demonstrate the difference between encoding individually and encoding with a batch:
```
import torch
from torchnlp.encoders.text import stack_and_pad_tensors
from torchnlp.utils import lengths_to_mask
from transformers import (BertModel, BertTokenizer, XLMRobertaModel,
XLMRobertaTokenizer)
torch.set_printoptions(precision=6)
def batch_encoder(samples, tokenizer):
batch = []
for sequence in samples:
batch.append(torch.tensor(tokenizer.encode(sequence)))
return stack_and_pad_tensors(batch, tokenizer.pad_token_id)
xlm = XLMRobertaModel.from_pretrained(
'xlm-roberta-base', output_hidden_states=True
)
bert = BertModel.from_pretrained(
'bert-base-multilingual-cased', output_hidden_states=True
)
xlm.eval()
bert.eval()
with torch.no_grad():
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
xlm_tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
samples = ["hello world!", "This is a batch and the first sentence will be padded"]
bert_tokens, bert_lengths = batch_encoder(samples, bert_tokenizer)
bert_attention_mask = lengths_to_mask(bert_lengths)
xlm_tokens, xlm_lengths = batch_encoder(samples, bert_tokenizer)
xlm_attention_mask = lengths_to_mask(xlm_lengths)
# Forward
bert_out = bert(input_ids=bert_tokens, attention_mask=bert_attention_mask)
xlm_out = xlm(input_ids=xlm_tokens, attention_mask=xlm_attention_mask)
bert_last_hidden_states, bert_pooler_output, bert_all_layers = bert_out
xlm_last_hidden_states, xlm_pooler_output, xlm_all_layers = xlm_out
# Testing by comparing pooler_out
bert_first_sample_tokens = torch.tensor(bert_tokenizer.encode(samples[0])).unsqueeze(0)
xlm_first_sample_tokens = torch.tensor(xlm_tokenizer.encode(samples[0])).unsqueeze(0)
bert_out = bert(input_ids=bert_first_sample_tokens)
xlm_out = xlm(input_ids=xlm_first_sample_tokens)
_, bert_pooler_output_1 , _ = bert_out
_, xlm_pooler_output_1 , _ = xlm_out
print (bert_pooler_output_1[0][:5])
print (bert_pooler_output[0][:5])
print ()
#assert torch.equal(bert_pooler_output_1[0], bert_pooler_output[0])
print (xlm_pooler_output_1[0][:5])
print (xlm_pooler_output[0][:5])
#assert torch.equal(xlm_pooler_output_1[0], xlm_pooler_output[0])
```
Script Output:
```
tensor([ 0.264619, 0.191050, 0.120784, -0.024288, -0.186887])
tensor([ 0.264619, 0.191049, 0.120784, -0.024288, -0.186887])
tensor([-0.114997, -0.025624, -0.171540, 0.725383, 0.318024])
tensor([-0.042580, 0.237069, 0.136827, 0.484221, 0.019779])
```
For BERT the results don't change that much... But for XLM-R the results are shockingly different!
Am I missing something?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>unstale<|||||>I think I'm getting a similar issue. I'm using DistilBERT in this case, but depending on the batch size, I see different outputs. The differences are slight, but confusing nonetheless. It seems like the difference happens once the batch size goes beyond 3. All batch sizes beyond 3 are identical, but <=3 and >3 are different. My example:
```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer
MODEL_NAME = 'distilbert-base-uncased'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # 'device' was used but not defined in the original snippet
distil_model = DistilBertModel.from_pretrained(MODEL_NAME)
distil_tokenizer = DistilBertTokenizer.from_pretrained(MODEL_NAME)
distil_model.to(device)
distil_model.eval()
torch.set_printoptions(precision=6)
samples = ["hello world!",
"goodbye world!",
"hello hello!",
"And so on and so on.",
"And so on and so forth."]
cond_output = {}
for cond in [2, 3, 5]:
tokens = distil_tokenizer.batch_encode_plus(
samples[:cond],
pad_to_max_length=True,
return_tensors="pt")
tokens.to(device)
outputs = distil_model(**tokens)
\# just taking the first token of the first sample
cond_output[cond] = outputs[0][:,0][0][:10].cpu().detach().numpy()
print(cond_output)
```
Outputs
```
{2: array([-0.18292062, -0.12333887, 0.1573697 , -0.1744302 , -0.25663155,
-0.20508605, 0.31887087, 0.45650607, -0.21000467, -0.14479966],
dtype=float32), 3: array([-0.18292062, -0.12333887, 0.1573697 , -0.1744302 , -0.25663155,
-0.20508605, 0.31887087, 0.45650607, -0.21000467, -0.14479966],
dtype=float32), 5: array([-0.1829206 , -0.12333884, 0.15736982, -0.1744302 , -0.25663146,
-0.20508616, 0.318871 , 0.45650616, -0.21000458, -0.14479981],
dtype=float32)}
```
Anyone have thoughts here? This causes some confusion when I run an individual sample through the model, as it's not the same as if I run it with 3 other samples.
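For what it's worth, here is a quick way I would check whether those differences stay within ordinary float32 tolerance (a sketch reusing the `samples`, `distil_tokenizer` and `distil_model` defined above; the `1e-4` tolerance is just an assumption, not an official threshold):
```python
single = distil_tokenizer.batch_encode_plus(samples[:1], pad_to_max_length=True, return_tensors="pt")
batched = distil_tokenizer.batch_encode_plus(samples, pad_to_max_length=True, return_tensors="pt")

with torch.no_grad():
    out_single = distil_model(**single)[0][0]    # first sample encoded on its own
    out_batched = distil_model(**batched)[0][0]  # the same sample inside the padded batch

# Only compare the positions that exist in the unpadded encoding of the first sample.
seq_len = single["input_ids"].shape[1]
print(torch.allclose(out_single, out_batched[:seq_len], atol=1e-4))
print((out_single - out_batched[:seq_len]).abs().max())
```
If that check passes, the batch-size effect is likely just padding plus floating-point accumulation order rather than anything the model does differently.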
<|||||>> I'm having the same issue, but with XLM-R:
>
> I decided to write a simple script to demonstrate the difference between encoding individually and encoding with a batch:
>
> ```
> import torch
> from torchnlp.encoders.text import stack_and_pad_tensors
> from torchnlp.utils import lengths_to_mask
> from transformers import (BertModel, BertTokenizer, XLMRobertaModel,
> XLMRobertaTokenizer)
>
> torch.set_printoptions(precision=6)
>
> def batch_encoder(samples, tokenizer):
> batch = []
> for sequence in samples:
> batch.append(torch.tensor(tokenizer.encode(sequence)))
> return stack_and_pad_tensors(batch, tokenizer.pad_token_id)
>
> xlm = XLMRobertaModel.from_pretrained(
> 'xlm-roberta-base', output_hidden_states=True
> )
>
> bert = BertModel.from_pretrained(
> 'bert-base-multilingual-cased', output_hidden_states=True
> )
>
>
> xlm.eval()
> bert.eval()
> with torch.no_grad():
> bert_tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
> xlm_tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
>
> samples = ["hello world!", "This is a batch and the first sentence will be padded"]
>
> bert_tokens, bert_lengths = batch_encoder(samples, bert_tokenizer)
> bert_attention_mask = lengths_to_mask(bert_lengths)
>
> xlm_tokens, xlm_lengths = batch_encoder(samples, bert_tokenizer)
> xlm_attention_mask = lengths_to_mask(xlm_lengths)
>
> # Forward
> bert_out = bert(input_ids=bert_tokens, attention_mask=bert_attention_mask)
> xlm_out = xlm(input_ids=xlm_tokens, attention_mask=xlm_attention_mask)
> bert_last_hidden_states, bert_pooler_output, bert_all_layers = bert_out
> xlm_last_hidden_states, xlm_pooler_output, xlm_all_layers = xlm_out
>
> # Testing by comparing pooler_out
> bert_first_sample_tokens = torch.tensor(bert_tokenizer.encode(samples[0])).unsqueeze(0)
> xlm_first_sample_tokens = torch.tensor(xlm_tokenizer.encode(samples[0])).unsqueeze(0)
> bert_out = bert(input_ids=bert_first_sample_tokens)
> xlm_out = xlm(input_ids=xlm_first_sample_tokens)
> _, bert_pooler_output_1 , _ = bert_out
> _, xlm_pooler_output_1 , _ = xlm_out
>
> print (bert_pooler_output_1[0][:5])
> print (bert_pooler_output[0][:5])
> print ()
> #assert torch.equal(bert_pooler_output_1[0], bert_pooler_output[0])
>
> print (xlm_pooler_output_1[0][:5])
> print (xlm_pooler_output[0][:5])
>
> #assert torch.equal(xlm_pooler_output_1[0], xlm_pooler_output[0])
> ```
>
> Script Output:
>
> ```
> tensor([ 0.264619, 0.191050, 0.120784, -0.024288, -0.186887])
> tensor([ 0.264619, 0.191049, 0.120784, -0.024288, -0.186887])
>
> tensor([-0.114997, -0.025624, -0.171540, 0.725383, 0.318024])
> tensor([-0.042580, 0.237069, 0.136827, 0.484221, 0.019779])
> ```
>
> For BERT the results don't change that much... But for XLM-R the results are shockingly different!
>
> Am I missing something?
Also experienced same issue using BertForPreTraining. This doesn't make sense to me --- there's no component in Bert which depends on the batch size. I mean things like BatchNorm in training mode output different results with changed batch sizes. But no such component is in Bert AFAIK. Anything I missed?
Another thing I noticed is that if I use FP16, some instances yield quite different embeddings, but some instances have totally identical embeddings (across different batch sizes). If I use FP32, all instances have only slightly different embeddings (but none of them are identical).<|||||>I'm also facing this issue. BERT returns different embeddings if I change the batch size. This happens only in train() mode. Did anyone figure out the reason? <|||||>same problem over here, any thoughts about it ?
<|||||>I'm having the same issue with BERT: slightly different outputs while only changing the batch size. It's driving me crazy because I don't understand where the mistake is.<|||||>I'm not working on BERT, but I see this phenomenon also on a transformer I am working on.
Any news? <|||||>Deleted, there is bug 😂<|||||>Having the same issue with T5 model.<|||||>I'm seeing similar issues on a fine-tuned distilbert-base-uncased model, sometimes the norm of the difference of tensors can go up to 0.2 which seems huge to me (for Semantic Search applications it means hundreds of items would move around in the ranking depending on the size of the batch used for computing the embeddings).
Is this issue closed ?
PS: I tried using float64 precision but it makes no difference.<|||||>Having the same issue. Any update?<|||||>Met the same issue.
At file transformers/models/roberta/modeling_roberta.py under function RobertaEncoder,
If I call
`layer_outputs = layer_module(`
`hidden_states[:2],`
`attention_mask[:2],`
`layer_head_mask,`
`encoder_hidden_states,`
`encoder_attention_mask,`
`past_key_value,`
`output_attentions,)`
and print
`hidden_states = layer_outputs[0]`
`print(hidden_states[0,0,:10])`
The results are different from the below version:
`layer_outputs = layer_module(`
`hidden_states,`
`attention_mask,`
`layer_head_mask,`
`encoder_hidden_states,`
`encoder_attention_mask,`
`past_key_value,`
`output_attentions,)`
I wonder if this is a bug in huggingface? The only difference between the two versions for me is that I change the input batch size. <|||||>having the same issue with bart model<|||||>Hi! @osanseviero this is the bug I mentioned to you at Khipu. I can reproduce the behavior using @bpben's code with transformers 4.27.1 and torch 2.0.0 on a RTX 3090 GPU. At least for me, it results in consistent generations for models such as Flan-T5 XL, albeit I haven't been able to get it to happen with a minimal enough example. Nevertheless, the issue made by @infinitylogesh mentioning this one shows that more people are struggling with it.
Let me know if I should open a new issue for this. |
transformers | 2,400 | closed | fix #2399 an ImportError in official example | 01-04-2020 13:20:20 | 01-04-2020 13:20:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=h1) Report
> Merging [#2400](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/78528742f169fb9481865aa25726ceca5499e036?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2400 +/- ##
=======================================
Coverage 73.24% 73.24%
=======================================
Files 87 87
Lines 14989 14989
=======================================
Hits 10979 10979
Misses 4010 4010
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=footer). Last update [7852874...71292b3](https://codecov.io/gh/huggingface/transformers/pull/2400?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,399 | closed | import Error from official example caused by fastprogress | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): BERT
Language I am using the model on (English, Chinese....): ALL
The problem arise when using:
* [x] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
V0.2.1 of fastprogress released a couple of days ago seems to cause errors in run_tf_ner.py in the official example.
Traceback (most recent call last):
File "run_tf_ner.py", line 12, in <module>
from fastprogress import master_bar, progress_bar
ImportError: cannot import name 'master_bar' from 'fastprogress' (/usr/local/lib/python3.7/dist-packages/fastprogress/__init__.py)
users need to either downgrade: pip3 install fastprogress==0.1.22
or change the code:
`
from fastprogress.fastprogress import master_bar, progress_bar
`
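A version-tolerant import that works on both sides of that change could look like this (just a sketch):
```python
try:
    from fastprogress.fastprogress import master_bar, progress_bar  # fastprogress >= 0.2
except ImportError:
    from fastprogress import master_bar, progress_bar  # older fastprogress releases
```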
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS:
* Python version:
* PyTorch version:
* PyTorch Transformers version (or branch):
* Using GPU ?
* Distributed or parallel setup ?
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| 01-04-2020 12:40:06 | 01-04-2020 12:40:06 | Thanks for reporting. (and hello @sgugger :)
I'll merge this to fix the immediate issue, but maybe @jplu can chime in: maybe we don't need the fastprogress dependency here?<|||||>Closed by #2400 <|||||>Oh, I forgot to update the `__init__` with the new version. Will add back the functions there to make compatibility easier. Thanks for the super quick fix!<|||||>Hello!! Thanks @julien-c for pinging me :)
Indeed, my code was not compatible with the latest version of fastprogress. I thought I had specified the version in the `requirements.txt` file, but apparently the way of installing the transformers framework has changed recently.
@sgugger good job, I like your fix :)
@julien-c fastprogress is (in my opinion) the most convenient progress bar to use for model training, but I can change it if it becomes a problem, as you wish.
<|||||>Alright, let's use `fastprogress` then! We can clean up the conditional import down the line.<|||||>I changed the:
`from fastprogress import master_bar, progress_bar`
to
`from fastprogress.fastprogress import master_bar, progress_bar`
in the ~/fastai/imports/core.py file and it worked |
transformers | 2,398 | closed | Distilbert predicting mask | Hi,
This is probably me doing something wrong, but I can't get DistilBERT to give me a sensible prediction when I mask part of a sentence.
This is the setup for BERT (based on the examples):
```
import logging
import torch
from transformers import BertTokenizer, BertForMaskedLM

logging.basicConfig(level=logging.INFO)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = "Hello how are you doing?"
tokenized_text = tokenizer.tokenize(text)
masked_index = 2
tokenized_text[masked_index] = '[MASK]'
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0]
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()
with torch.no_grad():
outputs = model(tokens_tensor, token_type_ids=segments_tensors)
predictions = outputs[0]
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token)
```
gives the correct answer _are_ for _How are you doing?_.
But when I try the same with distilbert:
```
import torch
from transformers import DistilBertTokenizer, DistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
text = "Hello how are you doing?"
tokenized_text = tokenizer.tokenize(text)
masked_index = 2
tokenized_text[masked_index] = '[MASK]'
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
model.eval()
with torch.no_grad():
# not adding/adding the segment tokens. when I give those to the model, it throws an error
last_hidden_states = model(tokens_tensor)
outputs = last_hidden_states[0]
predicted_index = torch.argmax(outputs[0], dim=1)[masked_index].item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token)
```
I practically always get some _unusedxxx_ as a result. At first I thought this was because distilbert is a smaller model, but no matter what I try, I keep getting unused, so I am guessing it's something else.
Thanks in advance!
| 01-04-2020 12:31:15 | 01-04-2020 12:31:15 | In the full `bert` case you are using `BertForMaskedLM`, but for DistilBERT you are using `DistilBertModel`, which is not for masked language modelling. Try using `DistilBertForMaskedLM`. Check it, it works:
https://colab.research.google.com/drive/1GYt9H9QRUa5clFfAke6KPYl0mi4H1F3H<|||||>Well, in hindsight that was obvious. :) Thanks! |
transformers | 2,397 | closed | unable to use distilbert multilingual model | ## ❓ Questions & Help
I'm trying to use the distilbert-base-multilingual-cased model but have been unable to do so.
I have checked and I am using transformers version 2.3.0. I have already tried these things:
1) tokenizer = AutoTokenizer.from_pretrained("https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-multilingual-cased-config.json")
model = AutoModel.from_pretrained("https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-multilingual-cased-pytorch_model.bin")
Gives following error message:
OSError: Model name 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-multilingual-cased-config.json' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-multilingual-cased-config.json' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
2) tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")
Gives following error message: OSError: Model name 'distilbert-base-multilingual-cased' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad). We assumed 'distilbert-base-multilingual-cased' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
3) Same as (2) but with DistilBertTokenizer and DistilBertModel.
Can I please get some help in fixing this issue?
| 01-03-2020 23:38:14 | 01-03-2020 23:38:14 | Hi
I have verified that it is working. Could you please share your environment details etc.
https://colab.research.google.com/drive/1Bo0luU5q7bztalw5-trWsvl7G0J6zE10<|||||>It seems you're not actually running on transformers 2.3.0. Could you please tell me the output of this code in your environment?
```py
from transformers import AutoModel, AutoTokenizer, __version__
print(__version__)
model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
```
Thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,396 | closed | Model2Model quickstart attention_mask dimensionality problem | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):
BERT-base-uncased
Language I am using the model on (English, Chinese....):
The problem arise when using:
* [X] the official example scripts: (give details)
[model2model tutorial code](https://huggingface.co/transformers/quickstart.html#model2model-example)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Copy the [model2model tutorial code](https://huggingface.co/transformers/quickstart.html#model2model-example) into a new file and run it.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
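For reference, the tutorial snippet being run is roughly the following (paraphrased from the linked quickstart, so exact variable names may differ slightly):
```python
import torch
from transformers import BertTokenizer, Model2Model

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = Model2Model.from_pretrained("bert-base-uncased")

question = "Who was Jim Henson?"
answer = "Jim Henson was a puppeteer"
question_tensor = torch.tensor([tokenizer.encode(question, add_special_tokens=True)])
answer_tensor = torch.tensor([tokenizer.encode(answer, add_special_tokens=True)])
labels_tensor = answer_tensor.clone()

outputs = model(question_tensor, answer_tensor, decoder_lm_labels=labels_tensor)
```
Running it produces: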
```
Traceback (most recent call last):
File "huggingface_m2m_example.py", line 47, in <module>
outputs = model(question_tensor, answer_tensor, decoder_lm_labels=labels_tensor)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_encoder_decoder.py", line 234, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 997, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 819, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 433, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 394, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 334, in forward
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/lfs1/jwei/anaconda3/envs/low_mt/lib/python3.7/site-packages/transformers-2.3.0-py3.7.egg/transformers/modeling_bert.py", line 257, in forward
RuntimeError: The size of tensor a (8) must match the size of tensor b (768) at non-singleton dimension 3
```
I printed out the attention masks and the attention scores [right before](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L253) and got the following:
```
question_tensor.shape: torch.Size([1, 7])
answer_tensor.shape: torch.Size([1, 8])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 7, 7])
attention_masks: torch.Size([1, 1, 1, 7])
I am a decoder
BertSelfAttention is being called
attention_scores: torch.Size([1, 12, 8, 8])
attention_masks: torch.Size([1, 1, 7, 768])
```
It looks like this is the first time that cross attention is being called. The `is_decoder` flag is being passed as `False` in the tutorial code and we changed it to `True` in the code ourselves. The error is the same irrespective of that change.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The code runs as given in the tutorial.
## Environment
* OS: Linux
* Python version: 3.7.5
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): Latest master branch
* Using GPU ? Yes
* Distributed or parallel setup ? No
* Any other relevant information:
| 01-03-2020 23:13:43 | 01-03-2020 23:13:43 | same issue:
Linux (ubuntu 18.04.3 LTS)
Python 3.6.9
Torch Version: 1.3.1
no GPU - regular DELL box,
transformers installed following this part of the installation guide (under a python3 venv):
...
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
...
Traceback (most recent call last):
File "/home/jimihendrix/projects/transformers/albert/quickstart4_model2model.py", line 65, in <module>
outputs = model(question_tensor, answer_tensor, decoder_lm_labels=labels_tensor)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py", line 234, in forward
decoder_outputs = self.decoder(decoder_input_ids, encoder_hidden_states, **kwargs_decoder)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 986, in forward
encoder_attention_mask=encoder_attention_mask,
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 808, in forward
encoder_attention_mask=encoder_extended_attention_mask,
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 422, in forward
hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 383, in forward
self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 329, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jimihendrix/projects/transformers/venv3/lib/python3.6/site-packages/transformers/modeling_bert.py", line 253, in forward
attention_scores = attention_scores + attention_mask
RuntimeError: The size of tensor a (8) must match the size of tensor b (768) at non-singleton dimension 3
<|||||>Indeed, I could reproduce this issue. Thanks for raising it!
My attempt at fixing it is [here](https://github.com/huggingface/transformers/pull/2452).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,395 | closed | ALBERT pretrained models uses wrong type of GELU activation | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): ALBERT
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [x] my own modified scripts: I'm fine-tuning ALBERT for a multi-label classification problem and then converting the model into TF-Lite format
The tasks I am working on is:
* [x] my own task or dataset: multi-label text classification on SemEval 2018 task 1:E-c
## To Reproduce
Steps to reproduce the behavior:
1. Open any link to pretrained configuration at [transformers/configuration_albert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_albert.py). For example: [albert-base-v2-config.json](https://s3.amazonaws.com/models.huggingface.co/bert/albert-base-v2-config.json). But problem applies to all the pretrained configs.
2. Check the value of property "hidden_act" (it will be gelu)
3. Realize, that gelu stands for bert-like implementation(see code [here](https://github.com/huggingface/transformers/blob/1ae63e4e097fe26c900783dd5c1710cf562e222e/src/transformers/modeling_bert.py#L152)), while original [code](https://github.com/google-research/ALBERT/blob/e350db671ae96b8345cd2c0ee1306713642b9810/modeling.py#L296) uses OpenAI-GPT - like gelu (it is defined in transformers as "gelu_new")
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
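For illustration, here is a small sketch contrasting the two activations (the erf-based gelu that the configs currently select vs. the tanh approximation registered as "gelu_new"); the formulas follow the two implementations linked above:
```python
import math
import torch

def gelu_erf(x):  # BERT-style gelu; the erf call is what pulls in tf.math.erf on export
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu_new(x):  # OpenAI-GPT / original ALBERT gelu (tanh approximation)
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

x = torch.linspace(-3.0, 3.0, steps=13)
print((gelu_erf(x) - gelu_new(x)).abs().max())  # small but nonzero difference
```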
## Expected behavior
All the configuration files should contain "gelu_new" under the "hidden_act" key.
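Until then, a possible workaround (a sketch; it relies on `from_pretrained` accepting config attribute overrides as keyword arguments) is to override the activation when loading:
```python
from transformers import AlbertConfig, AlbertModel

config = AlbertConfig.from_pretrained("albert-base-v2", hidden_act="gelu_new")
model = AlbertModel.from_pretrained("albert-base-v2", config=config)
```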
## Environment
* OS: doesn't matter
* Python version: 3.7
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): master
* Using GPU: doesn't matter
* Distributed or parallel setup: doesn't matter
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
While it possibly doesn't significantly change the model performance, it makes converting the model into TF-Lite format a lot more tricky, because the BERT-like gelu implementation uses the tf.math.erf function, which is not in the tflite-builtins set. So I have to use tflite-select ops, which leads to using a heavier tflite interpreter on the Android side. Also, it makes it impossible to check the converted model performance on the Python side because the default tensorflow-lite Python interpreter can't interpret select-ops models. | 01-03-2020 21:17:01 | 01-03-2020 21:17:01 | If you just change your config.json's `hidden_act` value locally you should still be able to load the pretrained weights and be able to convert the model to TFLite, right? <|||||>Yes. Another option (my current workaround) is to explicitly specify hidden_act when creating the model instance (via .from_pretrained(...)) during the fine-tuning stage. <|||||>It looks like you have fixed all ALBERT config files. Thanks!) |
transformers | 2,394 | closed | Pretrained model installation issue | I run the script for this repo "https://github.com/alexa/wqa_tanda" in which i need to run the run_glue.py file from Transformer Model, while running that script it gives an error-
Couldn't reach server at "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json to download pretrained model configuration file" as shown -

| 01-03-2020 16:19:53 | 01-03-2020 16:19:53 | The file on S3 seems accessible right now. Did you try to reach it directly from your machine and from console to check you have no network issue?<|||||>No, my network is having no issues I verified it, and from console it is not accessible. The required pretrained model is to be installed from command only as per code. The same issue still exists.
<|||||>Please, could you try calling python directly from console to check we have the same behavior?
```python
from transformers.modeling_auto import AutoModel
AutoModel.from_pretrained("bert-base-uncased")
```
if it doesn't fail and shows the model (as it happens in my console), it means you have access to the S3 and the issue is somewhere else.<|||||>In the same py file I need to write this lines ?
Yes, I'm calling python from console from initial phase.
<|||||>no need of a py file, just in a python3 console, you can type those commands directly (as long as you have pip installed transformers)... if you haven't a recent version with `AutoModel` available, use `BertModel` instead.<|||||>Even after doing so as you told the same issue exist, from both AutoModel and BertModel.

<|||||>let's look at what I have on my console:
```python
>>> BertModel.from_pretrained("bert-base-uncased", force_download=True)
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 313/313 [00:00<00:00, 29.5kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 440M/440M [00:23<00:00, 18.8MB/s]
BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm(torch.Size([768]), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1)
)
(encoder): BertEncoder(
...
```
So it finds the files and model from my network.
So that's why I wonder whether you have something in your network that prevents you from reaching S3.<|||||>On my console I don't know why it is not working; I tried the same as you told me to do. I think I should try some other way to sort this issue. On your note you are correct, but on my machine it doesn't work. Thanks buddy!<|||||>I saw that you were working on Windows, right? Are you sure you have no firewall/antivirus software blocking access or locking files or anything else? Just giving some ideas ;)
Anyway, you're welcome!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,393 | closed | run_squad_w_distillation update | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): DistilBERT and BERT
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: run_squad_w_distillation.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: SQuAD
## To Reproduce
Steps to reproduce the behavior:
1. Fine tuning on question answering from BERT to DistilBERT
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
The files utils_squad and utils_squad_evaluate are missing right now, so to make this script work I had to restore the files from a previous version of this repo. What I expected is to be able to use the script without resorting to such tricks.
## Environment
* OS: Ubuntu 16.04
* Python version: 3.5.6
* PyTorch version: 1.3.1
* Using GPU nvidia P100 (2x) or 1080Ti
## Additional context
<!-- Add any other context about the problem here. -->
| 01-03-2020 16:15:58 | 01-03-2020 16:15:58 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,392 | closed | Unable to download community models | ## 🐛 Bug
Model I am using (Bert, XLNet....): `bert-base-cased-finetuned-conll03-english`
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [x] the official example scripts: running a small snippet from docs (see below)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: just trying to load the model at this stage
## To Reproduce
Steps to reproduce the behavior:
I'm following the instructions at https://huggingface.co/bert-large-cased-finetuned-conll03-english but failing at the first hurdle. This is the snippet from the docs that I've run:
```python
tokenizer = AutoTokenizer.from_pretrained("bert-large-cased-finetuned-conll03-english")
model = AutoModel.from_pretrained("bert-large-cased-finetuned-conll03-english")
```
It fails with this message:
```
OSError: Model name 'bert-base-cased-finetuned-conll03-english' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-japanese, bert-base-japanese-whole-word-masking, bert-base-japanese-char, bert-base-japanese-char-whole-word-masking, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english/config.json' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
```
The message mentions looking at https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english/config.json and finding nothing.
I also tried with the CLI: `transformers-cli download bert-base-cased-finetuned-conll03-english` but I'm afraid that failed with a similar message. However both methods work for the namespaced models, e.g. `dbmdz/bert-base-italian-cased`.
## Expected behavior
The community model should download. :)
## Environment
* OS: openSUSE Tumbleweed 20200101
* Python version: 3.7
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? n/a
* Distributed or parallel setup ? n/a
* Any other relevant information:
## Additional context
I browsed https://s3.amazonaws.com/models.huggingface.co/ and see that the model is there, but paths are like:
https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english-config.json
rather than:
https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english/config.json
(note `-config.json` vs `/config.json`)
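Concretely, the manual workaround looks like this (a sketch; the `-vocab.txt` suffix is an assumption based on the same flat naming scheme):
```python
import os
import urllib.request

base = "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english"
local_dir = "conll03-english"
os.makedirs(local_dir, exist_ok=True)
for suffix, target in [("-config.json", "config.json"),
                       ("-pytorch_model.bin", "pytorch_model.bin"),
                       ("-vocab.txt", "vocab.txt")]:
    urllib.request.urlretrieve(base + suffix, os.path.join(local_dir, target))

from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModel.from_pretrained(local_dir)
```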
If I download the files manually and rename, the model loads. So it looks like just a naming problem. | 01-03-2020 14:55:43 | 01-03-2020 14:55:43 | I confirm what you see... in current master code, `bert-large-cased-finetuned-conll03-english` has no mapping in tokenizers or models so it can't find it in the same way as `bert-base-uncased` for example.
but it works if you target it directly:
```python
AutoTokenizer.from_pretrained("https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english-config.json")
AutoModel.from_pretrained("https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-conll03-english-pytorch_model.bin")
```<|||||>Hmm, I think I see the issue. @stefan-it @mfuntowicz we could either:
- move `bert-large-cased-finetuned-conll03-english` to `dbmdz/bert-large-cased-finetuned-conll03-english`
- or add shortcut model names inside the codebase (config, model, tokenizer)
What do you think?
(also kinda related to #2281)<|||||>@julien-c I think it would be better to move the model under the `dbmdz` namespace - as it is no "official" model!<|||||>@julien-c moving to *dbmdz* is fine. We need to update the default NER pipeline's model provider to reflect the new path. <|||||>Model now lives at https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english
Let me know if everything works correctly!<|||||>Works perfectly now, thanks! |