repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 4,393 | closed | BertTokenizerFast does not load custom vocab created from Tokenizer library | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Bert ["iuliaturc/bert_uncased_L-2_H-128_A-2"]
https://huggingface.co/iuliaturc/bert_uncased_L-2_H-128_A-2
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
my own modified scripts: (give details below)
The tasks I am working on is:
my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
see comments in the code
1. Train model and tokenizer on new tokenizers library
2. try to add new tokens to tokenizer
3. Resize token embeddings for model
### Problem 1:
If loading with AutoTokenizer, I cannot use BertTokenizerFast, but I am able to use the custom vocab
4. save tokenizer using save_tokenizer
5. Load using AutoTokenizer
### Problem 2:
If loading with BertTokenizerFast, I can use the functionality of BertTokenizerFast, but I lose the custom vocab
4. save tokenizer using save_tokenizer
5. Load using BertTokenizerFast
```python
import re
from pathlib import Path
from tokenizers import BertWordPieceTokenizer
from transformers import AutoModel, AutoTokenizer, BertTokenizerFast
# Loaded Model and Tokenizer Default
tokenizer = BertTokenizerFast.from_pretrained("""iuliaturc/bert_uncased_L-2_H-128_A-2""")
model = AutoModel.from_pretrained("""iuliaturc/bert_uncased_L-2_H-128_A-2""")
# Added Custom vocab to the tokenizer
def add_vocab_to_model(df, model, tokenizer, old_vocab, vocab_size=30000):
"""Adds new vocab to tokenizer and randomly initialises rows for new vocab in the model"""
PATH = Path('./lm_data')
PATH.mkdir(exist_ok=True)
df.dropna(inplace=True)
lm_text = ' '.join(df['text'])
lm_text = lm_text.replace("■","")
lm_text = re.sub(r'[^\x00-\x7F]+',' ', lm_text)
lm_text = re.sub(r'[^\u0000-\u007F]+',' ', lm_text)
with open('./lm_data/data.txt', mode='w') as f:
f.write(lm_text)
paths = [str(x) for x in Path("./lm_data/").glob("**/*.txt")]
# Initialize a tokenizer
tokenizer_new = BertWordPieceTokenizer(old_vocab, lowercase=True)
# Customize training
tokenizer_new.train(files=paths, vocab_size=vocab_size, min_frequency=20)
tokenizer_new.save(".", "./lm_data/new")
new_vocab = open('./lm_data/new-vocab.txt', 'r').read().split('\n')
new_vocab.remove('')
print('Adding new tokens to vocab')
n_orig_tokens = len(tokenizer)
tokenizer.add_tokens(new_vocab)
print('Original no. of tokens: %s'%n_orig_tokens)
print('Final no. of tokens: %s'%len(tokenizer))
print('Initialised random emb for new tokens')
model.resize_token_embeddings(len(tokenizer))
return model, tokenizer
# Just needs to pass a dataframe which is having a column with name "text"
model1, tokenizer1 = add_vocab_to_model(dataframe, model, tokenizer, 'bert-base-uncased-vocab.txt', vocab_size=10000)
#Adding new tokens to vocab
#Original no. of tokens: 30522
#Final no. of tokens: 34340
#Initialised random emb for new tokens
print(len(tokenizer1))
# Got the result : 34340
#Finalized Result
print(type(tokenizer1))
# transformers.tokenization_bert.BertTokenizerFast
# Final Tokenizer
# Problem 1 :
# If loading with AutoTokenizer, I cannot use BertTokenizerFast, but I am able to use the custom vocab
# Saved and Loaded Model Again
tokenizer1.save_pretrained("./tokenizer")
tokenizer2 = AutoTokenizer.from_pretrained("./tokenizer")
print(len(tokenizer2))
# Got 34340
# Which is Correct.
print(type(tokenizer2))
# transformers.tokenization_bert.BertTokenizer
# Not able to load BertTokenizerFast defaulting back to BertTokenizer
# Problem 2:
# If loading with BertTokenizerFast, I can use the functionality of BertTokenizerFast, but I lose the custom vocab
# Saved and Loaded Model Again
tokenizer1.save_pretrained("./tokenizer")
tokenizer3 = BertTokenizerFast.from_pretrained("./tokenizer")
print(len(tokenizer3))
# Got 30522
# Losing the added custom vocabs
print(type(tokenizer3))
# transformers.tokenization_bert.BertTokenizerFast
#which is correct
```
## Expected behavior
1. If loading the tokenizer using AutoTokenizer, it should load a BertTokenizerFast
2. If loading the tokenizer using BertTokenizerFast, it should maintain the custom vocab (see the sketch below)
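A minimal round-trip check that captures both expectations (illustrative only; `tokenizer1` is the tokenizer with the added tokens from the snippet above, and the paths are placeholders):
```python
from transformers import AutoTokenizer, BertTokenizerFast

tokenizer1.save_pretrained("./tokenizer")

reloaded_auto = AutoTokenizer.from_pretrained("./tokenizer")
reloaded_fast = BertTokenizerFast.from_pretrained("./tokenizer")

# Expectation 1: AutoTokenizer should give back a fast tokenizer
assert isinstance(reloaded_auto, BertTokenizerFast), type(reloaded_auto)
# Expectation 2: the added tokens should survive the save/load cycle
assert len(reloaded_fast) == len(tokenizer1), (len(reloaded_fast), len(tokenizer1))
```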
## Environment info
- `transformers` version: 2.9.1
- Platform: Windows
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-15-2020 21:16:57 | 05-15-2020 21:16:57 | Hi,
Can anyone from Huggingface team look into this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Similar to #6025 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,392 | closed | MarianMTModel translate {tgt}-{src} | Hello!
Is it possible to use `MarianMTModel` for translation from the target language to the source, i.e. backward? | 05-15-2020 18:43:31 | 05-15-2020 18:43:31 | Nope. Often the reverse model exists, however, e.g. `en-fr` and `fr-en`.<|||||>> Nope. Often the reverse model exists, however, e.g. `en-fr` and `fr-en`.
What if the reverse model does not exist for my language? <|||||>What's your language?
Maybe it's part of a multilingual group (below) or has been [trained](http://opus.nlpl.eu/Opus-MT/) since we did the conversion?
Otherwise, you're out of luck unfortunately.
```
GROUP_MEMBERS = {
'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],
'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],
'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],
'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],
'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']
}
```<|||||>> What's your language?
> Maybe it's part of a multilingual group (below) or has been [trained](http://opus.nlpl.eu/Opus-MT/) since we did the conversion?
> Otherwise, you're out of luck unfortunately.
>
> ```
> GROUP_MEMBERS = {
> 'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],
> 'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],
> 'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
> 'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
> 'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],
> 'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],
> 'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']
> }
> ```
Swahili, oooh! |
transformers | 4,391 | closed | [PretrainedTokenizer] is <unk> a special token? | This confused me a lot as I was trying to add common tests for Marian.
I believe special_tokens_mask [must](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L445) return 0 for positions with unk_token_id, but `<unk>` is in `SpecialTokensMixin.all_special_ids`.
Relatedly, this [RobertaTokenizer](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_roberta.py#L204) code suggests there should be a relationship between `all_special_ids` and `special_token_mask` logic.
Maybe `<unk>` shouldn't be in `SpecialTokensMixin.all_special_ids`?
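A small sketch (not part of the original report) that makes the mismatch concrete with the RobertaTokenizer referenced above:
```python
from transformers import RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-base")
ids = [tok.cls_token_id, tok.unk_token_id, tok.sep_token_id]

print(tok.unk_token_id in tok.all_special_ids)                             # True: <unk> counts as a special token
print(tok.get_special_tokens_mask(ids, already_has_special_tokens=True))   # [1, 0, 1]: but the mask ignores it
```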
| 05-15-2020 18:26:51 | 05-15-2020 18:26:51 | @mfuntowicz @LysandreJik might understand the best!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,390 | closed | [cleanup] test_tokenization_common.py |
- TesterMixin implements some oft-repeated functions
- split long tests into multiple test cases for better tracebacks
- misc typos
- type hints | 05-15-2020 18:05:54 | 05-15-2020 18:05:54 | |
transformers | 4,389 | closed | [MarianTokenizer] implement save_vocabulary and other common methods | - adds `test_tokenization_marian.py` which runs the common tokenizer tests.
- adds `save_vocabulary` and other methods to `MarianTokenizer` to make the common tests pass.
| 05-15-2020 17:53:33 | 05-15-2020 17:53:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=h1) Report
> Merging [#4389](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c3a70b4eaedab1dd9ad49990cfaa4d6cb8f6a0&el=desc) will **decrease** coverage by `0.37%`.
> The diff coverage is `96.22%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4389 +/- ##
==========================================
- Coverage 78.41% 78.03% -0.38%
==========================================
Files 123 123
Lines 20432 20477 +45
==========================================
- Hits 16021 15980 -41
- Misses 4411 4497 +86
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.10% <96.22%> (+5.77%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=footer). Last update [48c3a70...1bd2df5](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Can anyone help with this issue: #5040 ? |
transformers | 4,388 | closed | [docs] AutoModelWithLMHead(model_name, **kwargs) | `AutoModelWithLMHead.from_pretrained` cannot accept `output_attentions=True`
This [docstring](https://huggingface.co/transformers/model_doc/auto.html?highlight=transformers%20automodelwithlmhead#transformers.AutoModelWithLMHead) suggests that it can.
Thanks @jrvc for discovering!
```python
import transformers
modelname='Helsinki-NLP/opus-mt-en-de'
config_overrider={'output_attentions':True, 'output_hidden_states':True}
model = transformers.AutoModelWithLMHead.from_pretrained(modelname, **config_overrider)
# =>
# *** TypeError: __init__() got an unexpected keyword argument 'output_attentions'
```
I will dig deeper! | 05-15-2020 15:37:27 | 05-15-2020 15:37:27 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>unstale?<|||||>This was fixed with https://github.com/huggingface/transformers/pull/5665! |
transformers | 4,387 | closed | [Bart/Marian] ignore output_attentions when invoked through AutoModel | [ ] should mimic behavior of other models
[ ] backwards compat? | 05-15-2020 15:16:28 | 05-15-2020 15:16:28 | this works for me:
```python
from transformers import MarianMTModel

modelname = 'Helsinki-NLP/opus-mt-en-de'
model = MarianMTModel.from_pretrained(modelname,output_attentions=True, output_hidden_states=True)
output_tuple = model(**model.dummy_inputs)
assert len(output_tuple) == 6
```<|||||>#4388 is the real issue. |
transformers | 4,386 | closed | Finetuning BERT classifier on a non-GLUE dataset in GLUE format | # ❓ Questions & Help
## Details
Hello,
I have a dataset of my own, formatted in GLUE format. Is it possible to use the script `run_glue.py` to finetune BERT on this dataset? If that's not possible, is there an alternative script for finetuning a classifier on arbitrary GLUE-formatted dataset?
Thanks!
|
transformers | 4,385 | closed | Should return overflowing information for the log | closes #4380 | 05-15-2020 13:46:31 | 05-15-2020 13:46:31 | |
transformers | 4,384 | closed | Attempt to unpin torch version for Github Action. | 05-15-2020 13:39:21 | 05-15-2020 13:39:21 | ||
transformers | 4,383 | closed | Issue with lr during training from scratch | I'm trying to resume my pretraining of a RoBERTa model and I noticed that everything is OK except the learning rate. I thought that it should continue to decrease from the last checkpoint (lr = 7e-7) but the lr is much bigger (lr = 5e-5) than it should be.
These are my parameters to run the script:
- --model_name_or_path ./ROBERTA-small-v1/checkpoint-30000
- --train_data_file TEXT.txt
- --output_dir ./ROBERTA-small-v2
- --mlm
- --config_name ./ROBERTA_SMILES-v1/checkpoint-30000
- --tokenizer_name ./ROBERTA
- --do_train
- --line_by_line
- --num_train_epochs 10
- --save_total_limit 1
- --save_steps 2000
- --per_gpu_train_batch_size 16
- --seed 42
- --logging_steps 250
| 05-15-2020 12:28:26 | 05-15-2020 12:28:26 | Are you restarting from your checkpoint with the exact same set of script parameters?<|||||>@julien-c Actually, no. And I guess I understood what the reason for that behavior is. Originally I trained with the number of epochs = 5. After that I selected 10 epochs and hoped that it would continue pretraining from the 5th epoch up to the 10th. However, as far as I understand, the lr value is calculated based on the number of epochs. The current value of the lr is the same as I got at the 15k iteration (30k/2). So I should have selected 10 (or whatever number) epochs originally<|||||>Yes. You can always implement this behaviour yourself if you need it.
transformers | 4,382 | closed | Need clarity on training Albert from scratch | # ❓ Questions & Help
Using transformers 2.9.1
I am following the EsperBERTo tutorial for training an ALBERT LM from scratch. I am able to start training for RoBERTa, but for ALBERT I get a tokenizer issue:
```
05/15/2020 16:23:29 - INFO - transformers.training_args - PyTorch: setting up devices
05/15/2020 16:23:29 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: False
05/15/2020 16:23:29 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='./EsperBERTo-small-v1', overwrite_output_dir=False, do_train=True, do_eval=False, do_predict=False, evaluate_during_training=False, per_gpu_train_batch_size=16, per_gpu_eval_batch_size=8, gradient_accumulation_steps=1, learning_rate=0.0001, weight_decay=0.0, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, warmup_steps=0, logging_dir=None, logging_first_step=False, logging_steps=500, save_steps=2000, save_total_limit=2, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False)
05/15/2020 16:23:29 - INFO - transformers.configuration_utils - loading configuration file ./EsperBERTo/config.json
05/15/2020 16:23:29 - INFO - transformers.configuration_utils - Model config AlbertConfig {
"architectures": [
"AlbertForMaskedLM"
],
"attention_probs_dropout_prob": 0,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"down_scale_factor": 1,
"embedding_size": 128,
"eos_token_id": 3,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 768,
"initializer_range": 0.02,
"inner_group_num": 1,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "albert",
"net_structure_type": 0,
"num_attention_heads": 12,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 52000
}
05/15/2020 16:23:29 - INFO - transformers.configuration_utils - loading configuration file ./EsperBERTo/config.json
05/15/2020 16:23:29 - INFO - transformers.configuration_utils - Model config AlbertConfig {
"architectures": [
"AlbertForMaskedLM"
],
"attention_probs_dropout_prob": 0,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"down_scale_factor": 1,
"embedding_size": 128,
"eos_token_id": 3,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 768,
"initializer_range": 0.02,
"inner_group_num": 1,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "albert",
"net_structure_type": 0,
"num_attention_heads": 12,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 52000
}
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - Model name './EsperBERTo' not found in model shortcut name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). Assuming './EsperBERTo' is a path, a model identifier, or url to a directory containing tokenizer files.
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - Didn't find file ./EsperBERTo/spiece.model. We won't load it.
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - Didn't find file ./EsperBERTo/added_tokens.json. We won't load it.
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - Didn't find file ./EsperBERTo/special_tokens_map.json. We won't load it.
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - loading file None
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - loading file None
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - loading file None
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - loading file ./EsperBERTo/tokenizer_config.json
Traceback (most recent call last):
File "/home/gamut/Downloads/transformers_source/examples/language-modeling/run_language_modeling.py", line 289, in <module>
main()
File "/home/gamut/Downloads/transformers_source/examples/language-modeling/run_language_modeling.py", line 188, in main
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 203, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 902, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 1055, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/transformers/tokenization_albert.py", line 155, in __init__
self.sp_model.Load(vocab_file)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/sentencepiece.py", line 275, in _sentencepiece_processor_load
return self._Load_native(model_file)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/sentencepiece.py", line 75, in Load
return _sentencepiece.SentencePieceProcessor_Load(self, filename)
TypeError: not a string
CPU times: user 13.6 ms, sys: 27.4 ms, total: 41.1 ms
Wall time: 1.12 s
```
I guess it was looking for a spiece model. So I added spiece.model in the EsperBERTo folder and it started training.
Now I am confused, as the vocab it is using was generated with the Hugging Face tokenizers library, and it is taking spiece.model as well... can anyone clarify what's going on here?
| 05-15-2020 12:11:23 | 05-15-2020 12:11:23 | The Albert original model has a SentencePiece tokenizer, so the `AlbertTokenizer` can only handle SentencePiece vocabularies right now. The `tokenizers` library doesn't handle these vocabularies as of now.
We're still thinking of how we should proceed when training a model from scratch with a tokenizer different to the original, as right now the model-tokenizer pairs have a fixed tokenizer backend. cc @julien-c @thomwolf <|||||>@LysandreJik But when I added spiece.model it started training. So is it taking vocab created by tokenizer library along with spiece.model?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Does this issue have been solved? When can we train the lm from scratch?<|||||>You can already train an LM from scratch, there are examples in `examples/language-modeling`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Are there any updates on training Albert from scratch? Perhaps via the Trainer?<|||||>Hi, a `DataCollatorForSOP` has been added to the library, which you can indeed use with the `Trainer` object. You can see its implementation here:
https://github.com/huggingface/transformers/blob/6303b5a7185fba43830db0cbb06c61861f57ddff/src/transformers/data/data_collator.py#L201-L206<|||||>> Hi, a `DataCollatorForSOP` has been added to the library, which you can indeed use with the `Trainer` object. You can see it's implementation here:
>
> https://github.com/huggingface/transformers/blob/6303b5a7185fba43830db0cbb06c61861f57ddff/src/transformers/data/data_collator.py#L201-L206
When I try to use this `DataCollatorForSOP` it says that it is deprecated and that I should use `DataCollatorForLanguageModeling`. It is, however, not clear whether this data collator also produces the sentence-order-prediction objective as in the original ALBERT paper. Can anyone verify whether it does this or not? |
transformers | 4,381 | closed | Unknown task fill-mask | # 🐛 Bug
## Information
I am trying to create a "Pipeline" for fill-mask task
Language I am using the model on is English
The problem arises when using:
My own modified scripts: (give details below)
```
import transformers
transformers.__version__
```
>>'2.3.0'
```
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in <mask>')
```
>>---------------------------------------------------------------------------
>>KeyError Traceback (most recent call last)
>><ipython-input-22-d415d13f7528> in <module>
>>----> 1 nlp_fill = pipeline('fill-mask')
>> 2 nlp_fill('Hugging Face is a French company based in <mask>')
>>
>>~\Anaconda3\lib\site-packages\transformers\pipelines.py in pipeline(task, model, config, tokenizer, modelcard, **kwargs)
>> 848 # Retrieve the task
>> 849 if task not in SUPPORTED_TASKS:
>>--> 850 raise KeyError("Unknown task {}, available tasks are {}".format(task, list(SUPPORTED_TASKS.keys())))
>> 851
>> 852 framework = get_framework(model)
>> **KeyError**: "Unknown task fill-mask, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering']"
The tasks I am working on is:
My own task or dataset. Trying using Pipeline.
## To reproduce
Steps to reproduce the behavior:
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in <mask>')
## Expected behavior
Should have got a filled output with a score.
## Environment info
'transformers-cli' is not recognized as an internal or external command, operable program or batch file.
- `transformers` version: 2.3.0
- Platform: Windows Jupyter notebook
- Python version: 3.7.4
| 05-15-2020 09:39:27 | 05-15-2020 09:39:27 | Can you upgrade your version of transformers?<|||||>Same issue I am not able to run the command `transformer-cli env` even after installing from source @julien-c .
`transformers-cli is not recognized as an internal or external command operable program or batch file`
Just wondering if this is due to using `conda`, as in the latest versions it prefers not to modify the PYTHONPATH, hence the command not being visible?
<|||||>Upgrading the transformers version to 2.9.1 works! |
transformers | 4,380 | closed | max_qa_length is needed for funetune on multiple-choice problems | # 🚀 Feature request
In examples/multiple-choice/run_multiple_choice.py, max_qa_length is not implemented as it is in google/ALBERT, which causes trouble when finetuning on RACE.
This should be easy because there is similar code in run_squad.py. A rough sketch of the idea follows.
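As a rough illustration of what `max_qa_length` does in the google/ALBERT RACE setup (a sketch of the idea under my reading of that code, not the repository's actual implementation):
```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
max_seq_length, max_qa_length = 512, 128

context = "A very long RACE passage ..."
question_plus_option = "What does the passage imply? It implies X."

# Cap the question+option tokens to max_qa_length before pairing them with the passage
qa_tokens = tokenizer.tokenize(question_plus_option)[-max_qa_length:]
inputs = tokenizer.encode_plus(
    context,
    tokenizer.convert_tokens_to_string(qa_tokens),
    add_special_tokens=True,
    max_length=max_seq_length,
    pad_to_max_length=True,
)
```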
### Your contribution
I have written the script and get similar RACE performance compared to google/ALBERT.
# Current Bug
In examples/multiple-choice/utils_multiple_choice.py
```python
# From 537 lines:
# It should be =>
# inputs = tokenizer.encode_plus(
# text_a, text_b, add_special_tokens=True, max_length=max_length, pad_to_max_length=True, return_overflowing_tokens=True
# )
# Or the log info below would never be activated.
inputs = tokenizer.encode_plus(
text_a, text_b, add_special_tokens=True, max_length=max_length, pad_to_max_length=True,
)
if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0:
logger.info(
"Attention! you are cropping tokens (swag task is ok). "
"If you are training ARC and RACE and you are poping question + options,"
"you need to try to use a bigger max seq length!"
)
```
| 05-15-2020 09:20:08 | 05-15-2020 09:20:08 | Indeed, you're totally correct. Thank you for reporting the issue, I'm fixing it in #4385. |
transformers | 4,379 | closed | MNLI finetuning results affected by values of max_steps | # 🐛 Bug
If you follow my bash script and change MAX from 40000 to 50000, the performance is dramatically affected. However, 50000 may not be the smallest number that shows this behavior, and I am not sure if this is a bug or something expected; the performance is severely affected, so I decided to report it.
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Set MAX=40000, Run my provided code
2. Set MAX=50000, Run my provided code
3.
```
export GLUE_DIR=downstream_datasets/glue
export TASK_NAME=QQP
SAVING_PERIOD=1000
MAX=40000
BS=32
SEED=42
#EPOCH=1
export CUDA_VISIBLE_DEVICES=0
for TASK_NAME in MNLI # QQP SST-2
do
echo $TASK_NAME
python run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name $TASK_NAME \
--do_train \
--save_steps $SAVING_PERIOD \
--logging_steps $SAVING_PERIOD \
--max_steps $MAX \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_train_batch_size $BS \
--per_gpu_eval_batch_size $BS \
--learning_rate 2e-5 \
--seed $SEED \
--output_dir model_output/"$TASK_NAME"_maxlen128_seed"$SEED"_savesteps"$SAVING_PERIOD"_maxsteps"$MAX"/ \
--logging_dir runs/"$TASK_NAME"_maxlen128_seed"$SEED"_savesteps"$SAVING_PERIOD"_maxsteps"$MAX"/ \
--logging_first_step \
--evaluate_during_training
done
```
## Expected behavior
From my limited experiments, in the tensorboard dashboard:
MAX=40000: the acc will keep climbing.
MAX=50000: the acc will fluctuate throughout training. It cannot exceed 40%.
I suspect MNLI's performance is affected by the learning rate set by the optimizer and scheduler, which are in turn affected by max_steps. However, this behaviour **does not** hold on other similar-sized datasets like QQP and SST-2. This inconsistency is confusing.
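One quick way to see that dependence (illustrative only, not from the original report): with the default linear schedule, the learning rate reached after the same number of updates depends on `num_training_steps`:
```python
import torch
from transformers import get_linear_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))
for total_steps in (40_000, 50_000):
    optimizer = torch.optim.AdamW([param], lr=2e-5)
    scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=total_steps)
    for _ in range(10_000):
        optimizer.step()
        scheduler.step()
    print(total_steps, scheduler.get_last_lr())  # same number of steps, different learning rate
```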
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-5.3.0-46-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: single gpu
| 05-15-2020 07:03:33 | 05-15-2020 07:03:33 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,378 | closed | Implement SuperGLUE tasks and baselines | WIP implementation of the SuperGLUE tasks, following the style of the GLUE data processors. This PR includes:
- classification with marked spans: most tasks are classification and straightforward, but WiC and WSC have special spans. This PR implements `*SpanClassification` models (currently for just BERT and RoBERTa) and relevant data classes
- scripts for using and extracting results from the [experiment impact tracker](https://github.com/Breakend/experiment-impact-tracker) library, which tracks energy usage and is being used for the SustaiNLP 2020 competition
TODO
- clean up some of my development scripts and cruft | 05-15-2020 04:33:10 | 05-15-2020 04:33:10 | Hi @W4ngatang did you see that we have a new `Trainer` class in the library? It's basically a refactor of the previous version of `run_glue.py` so should be pretty easy to adopt.
Let us know if we can help, and very excited to have an easy path to SuperGLUE tasks in the repo :)<|||||>Hi Julien,
Saw it! I started writing this a while ago, but you all move very fast. We're preparing this as some reference code for a competition, so we may keep the training code as is for a bit and then switch to the trainer when we're ready to actually merge into the main repo.<|||||>@thomwolf <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=h1) Report
> Merging [#4378](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90f4b2452077ac3bac9453bdc63e0359aa4fe4d2&el=desc) will **decrease** coverage by `1.47%`.
> The diff coverage is `22.83%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4378 +/- ##
==========================================
- Coverage 78.00% 76.52% -1.48%
==========================================
Files 138 139 +1
Lines 23766 24408 +642
==========================================
+ Hits 18539 18679 +140
- Misses 5227 5729 +502
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `83.36% <12.12%> (-4.14%)` | :arrow_down: |
| [src/transformers/data/metrics/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3MvX19pbml0X18ucHk=) | `16.66% <12.35%> (-10.00%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `83.80% <19.44%> (-10.99%)` | :arrow_down: |
| [src/transformers/data/processors/superglue.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3VwZXJnbHVlLnB5) | `21.73% <21.73%> (ø)` | |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `35.59% <84.61%> (+7.96%)` | :arrow_up: |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.21% <100.00%> (ø)` | |
| [src/transformers/data/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/data/processors/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvX19pbml0X18ucHk=) | `100.00% <100.00%> (ø)` | |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.44% <100.00%> (ø)` | |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=footer). Last update [90f4b24...576ecaf](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks super-cool, especially with SustaiNLP just around the corner! Is there any help that could be given that would accelerate this PR?<|||||>hi @W4ngatang @thomwolf
any help you need for this PR? I think pushing this PR through can help many researchers with their experiments. <|||||>Hi Tianyi,
Yes, help would be great! The code is correct from my end, but uses an old
version of Transformers. At this point, it's just a matter of merging the
most recent version of Transformers in and adhering to the Transformers
contributor guidelines.
Best,
Alex
On Thu, Oct 8, 2020 at 5:17 PM Tianyi <[email protected]> wrote:
> hi @W4ngatang <https://github.com/W4ngatang> @thomwolf
> <https://github.com/thomwolf>
>
> any help you need for this PR? I think pushing this PR through can help
> many researchers with their experiments.
>
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 4,377 | closed | Added README huseinzol05/t5-base-bahasa-cased | 05-15-2020 04:21:12 | 05-15-2020 04:21:12 | ||
transformers | 4,376 | closed | Error on instantiating pipeline.summarizer | # 🐛 Bug
## Information
The problem arises when using:
I'm just testing out the different modules in the package and following the examples on the [usage page](https://huggingface.co/transformers/usage.html)
## To reproduce
Run the following snippet with the environment set up as described in the relevant section below.
`from transformers import pipeline`
`summarizer = pipeline("summarization")`
This is the traceback:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
----> 1 summarizer = pipeline("summarization")
~/Projects/amlg/playground/venv/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs)
1759 model = model_class.from_pretrained(model, config=config, **model_kwargs)
1760
-> 1761 return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)
~/Projects/amlg/playground/venv/lib/python3.7/site-packages/transformers/pipelines.py in __init__(self, model, tokenizer, modelcard, framework, task, args_parser, device, binary_output)
392
393 # Update config with task specific parameters
--> 394 task_specific_params = self.model.config.task_specific_params
395 if task_specific_params is not None and task in task_specific_params:
396 self.model.config.update(task_specific_params.get(task))
AttributeError: 'NoneType' object has no attribute 'config'
```
## Environment info
- `transformers` version: 2.9.1
- Platform: Darwin-19.3.0-x86_64-i386-64bit
- Python version: 3.7.3
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: Nope
- Using distributed or parallel set-up in script?: Nope
| 05-15-2020 02:31:43 | 05-15-2020 02:31:43 | I am also facing the same problem<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,375 | closed | [docs] Restore examples.md symlink | 05-15-2020 00:43:22 | 05-15-2020 00:43:22 | Doesn't work right now as links are relative links. We should update them to global links before, I can take care of it. |
|
transformers | 4,374 | closed | Speed on various cards | This is not a bug, but I wonder if someone knows what the speed difference between different cards is.
I currently run BERT-base on a 1080.
How much speedup will I get using a 1080 Ti or a 2080? Which one is faster for Transformers?
I know a 2080 Ti would be the best, but the best one I can realistically get is a 1080 Ti or a 2080. | 05-14-2020 22:09:48 | 05-14-2020 22:09:48 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,373 | closed | Specify tensorboard logging in BART finetuning | Allows you to specify the directory in which to store tensorboard logs when running the BART finetuning script.
Issue: #4349
| 05-14-2020 21:58:31 | 05-14-2020 21:58:31 | You may also need to rebase to fix `check_code_quality`.
Also `run_examples_torch` will only pass if it doesn't add new dependencies.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,372 | closed | Allow for None gradients in GradientAccumulator. | The [TFTrainer commit](https://github.com/huggingface/transformers/commit/aad50151f35b934039c455242c433a24f011cc93#diff-2a765415189cf1edda4414e1758b0e79) seems to have reverted the ability for gradients to be None in GradientAccumulator. This behavior is necessary when pretraining a model, such as AlbertForPreTraining, and using only one loss (only SOP or only MLM).
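A tiny TF2 snippet (illustrative, not from the PR) showing where such `None` gradients come from: variables that the selected loss never touches get `None` from the tape.
```python
import tensorflow as tf

w_mlm, w_sop = tf.Variable(1.0), tf.Variable(1.0)
with tf.GradientTape() as tape:
    loss = 2.0 * w_mlm            # pretend only the MLM head contributes to the loss
grads = tape.gradient(loss, [w_mlm, w_sop])
print(grads)                      # [<tf.Tensor 2.0>, None] -> the accumulator has to tolerate the None
```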
The [OpenNMT repository](https://github.com/OpenNMT/OpenNMT-tf/blob/master/opennmt/optimizers/utils.py) the code was taken from lacks this ability, but it's important. This commit restores the ability for gradients to be None. | 05-14-2020 21:55:15 | 05-14-2020 21:55:15 | |
transformers | 4,371 | closed | Save pretrained MarianTokenizer | Hello!
I can't save a pretrained MarianTokenizer: as a subclass of PretrainedTokenizer, it raises NotImplementedError.
| 05-14-2020 19:26:43 | 05-14-2020 19:26:43 | Also, how can I transfer to CUDA the `BatchEncoding` object which is returned after calling the `prepare_translation_batch` method on `MarianTokenizer`?<|||||>This could be fixed by specifying a `save_vocabulary` function, like it is used in:
https://github.com/huggingface/transformers/blob/7defc6670fa76e857109e1b99f3e919da8d11f42/src/transformers/tokenization_xlm_roberta.py#L292-L311
(both source and target vocabs need to be written),
Additionally, it would be awesome if the `MarianTokenizer` class would be `pickle`-able using `__getstate__` and `__setstate__` methods :)
/cc @sshleifer <|||||>@NonaryR `BatchEncoding.to(device:str)` should work.
Will fix vocab saving, great catch!
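A minimal sketch of that suggestion for later readers (assumes a CUDA device and the 2.9-era `prepare_translation_batch` API):
```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name).to("cuda")

batch = tokenizer.prepare_translation_batch(["Hello world"]).to("cuda")
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```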
<|||||>Thank you, `.to(device)` helped!
I can close the issue if you want.
Also, many thanks for this new release with all these new awesome translation models, they will bring significant changes to the community!!<|||||>But how can you load the saved MarianTokenizer from your local directory?
Can anyone give a complete example about how to save and load MarianTokenizer?<|||||>Can anyone help with this issue: #5040 ? |
transformers | 4,370 | closed | run_squad with early stopping on a validation set | Hello.
Is there a way to use run_squad with early stopping on a validation set?
I have 3 files: train-v1.1.json, dev-v1.1.json, and test-v1.1.json.
I want to train on the train file, stop the training when the loss on the dev file starts to increase, and then do the final prediction and answers output on the test set.
In Keras it's pretty straightforward:
`history = model.fit(X_train, y_train, validation_data=(X_dev, y_dev), epochs=4000, verbose=0, callbacks=[EarlyStopping(monitor='dev_loss', mode='min')])`
How can I have similar functionality in Hugging Face's run_squad? I'm asking whether there is a built-in way before implementing it from scratch.
I thought of changing the code at the point of:
`--evaluate_during_training` - it activates the evaluation during training - I can change the file to dev-v1.1.json at this point, and stop the training if the validation loss increases (a rough sketch follows).
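Roughly the kind of loop meant here (a schematic sketch with placeholder helpers, not the actual run_squad code):
```python
import math

def train_one_epoch(model):          # placeholder for run_squad's training loop over train-v1.1.json
    pass

def evaluate_dev_loss(model):        # placeholder: return the loss on dev-v1.1.json
    return math.inf

model, best, bad, patience = None, math.inf, 0, 2
for epoch in range(10):
    train_one_epoch(model)
    dev_loss = evaluate_dev_loss(model)
    if dev_loss < best:
        best, bad = dev_loss, 0      # new best: save a checkpoint here
    else:
        bad += 1
        if bad >= patience:
            break                    # stop early, then predict on test-v1.1.json with the saved checkpoint
```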
I would like to know if there is a better way.
Thanks | 05-14-2020 19:09:46 | 05-14-2020 19:09:46 | Hi, this would be something you would need to build yourself.
Alternatively you could use our Layers as Keras Layers? |
transformers | 4,369 | closed | demo website: i info icon should link to resource about parameters | The website/demo at https://transformer.huggingface.co does have a "Model & decoder settings" box at the bottom left.
Next to it, at the right, there is an "(i)" sign.
STR: Click on it.
**What happens:** It links to this repo.
**What should happen:** It should link to somewhere where the parameters are actually explained.
Reason:
The generic link to this repo is way too – well – generic. I've searched the README for the parameter names and could not find anything useful/an explanation of what they actually do.
The reason I click on that icon is to know what I can adjust there/what happens if I change an option; I don't just want to see the general project.
There should be a page that just explains what each option means (in simple terms):
* Model size: what model to use (from small to big), the bigger the more useful/accurate(?) it may be
* Top p:
* Temperature:
* Max time: how long the model should – at most – compute to search for results | 05-14-2020 18:24:13 | 05-14-2020 18:24:13 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi @rugk, open to any suggestion for a better documentation link – e.g. should we link to https://huggingface.co/blog/how-to-generate ?
Maybe https://medium.com/huggingface/how-to-write-with-transformer-5ee58d6f51fa ?
Or to https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313#79c5 like on https://convai.huggingface.co/
cc @patrickvonplaten @sgugger <|||||>I dunno. My complaint was just an UX complaint from a user with less scientific background, so linking to very in-deep document is likely bad.
How you solve this hmm?
Maybe just write a new doc/small wiki entry here on GitHub or so?
Though if I check your links the "Advanced settings" section in https://medium.com/huggingface/how-to-write-with-transformer-5ee58d6f51fa It does what I want: explains the parameter.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>shh #badbot :no_bell::robot::no_bell: This is still a TODO.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,368 | closed | past functionality broken in release 2.9.0 and 2.9.1 | # 🐛 Bug
## Information
Model: gpt2
## To reproduce
Steps to reproduce the behavior:
```
from transformers.tokenization_gpt2 import GPT2Tokenizer
from transformers.modeling_gpt2 import GPT2LMHeadModel
import torch
# Remember to run transformers with latest master (not release 2.5.1)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<|endoftext|>')
model = GPT2LMHeadModel.from_pretrained('gpt2')
# Complete phrases are: "I like to drink soda without sugar" and "Go watch TV alone, I am not going"
doc = "I like to"
# note: comment the above line and uncomment the following line to make it work with 1 document
docs_tensors = tokenizer.batch_encode_plus([doc], pad_to_max_length=True, return_tensors='pt')
docs_next = [" soda and ", " with this"]
# note: comment the above line and uncomment the following line to make it work with 1 document
docs_next_tensors = tokenizer.batch_encode_plus(
[d for d in docs_next], pad_to_max_length=True, return_tensors='pt')
# predicting the first part of each phrase
_, past = model(docs_tensors['input_ids'], attention_mask=docs_tensors['attention_mask'])
# manipulating the past
past_expanded = [torch.repeat_interleave(layer, torch.LongTensor([2]), dim=1) for layer in past]
past_attention_mask = torch.ones(docs_next_tensors['attention_mask'].shape[0], len(docs_tensors['input_ids'][0]), dtype=torch.int64)
attn_mask = torch.cat([past_attention_mask, docs_next_tensors['attention_mask']], dim=-1)
# predicting the rest of the phrase with past
logits, _ = model(docs_next_tensors['input_ids'], attention_mask=attn_mask, past=past_expanded)
logits = logits[:, -1]
_, top_indices_results = logits.topk(50)
words = [tokenizer.decode([idx.item()]) for tir in top_indices_results for idx in tir]
print("Predictions for:", [doc + n for n in docs_next])
print("Results with past:", words)
#####################
docs_full_tensors = tokenizer.batch_encode_plus(
[doc + n for n in docs_next], pad_to_max_length=True, return_tensors='pt')
logits, _ = model(docs_full_tensors['input_ids'], attention_mask=docs_full_tensors['attention_mask'])
logits = logits[:, -1]
_, top_indices_results = logits.topk(50)
words = [tokenizer.decode([idx.item()]) for tir in top_indices_results for idx in tir]
print("Predictions for:", [doc + n for n in docs_next])
print("Results without past:", words)
```
## Expected behavior
I am expecting the past functionality to work. But I'm getting the following error:
```
RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 3
```
This was working correctly up to release 2.8.0
## Environment info
- `transformers` version: 2.9.0 / 2.9.1
- Platform: macOS
- Python version:
- PyTorch version (GPU?): 1.4.0 / 1.5.0 (CPU)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 05-14-2020 18:23:13 | 05-14-2020 18:23:13 | Hi @Damiox,
I think I see the problem. That's quite an edge case here. Since `2.8.0` (here the PR: https://github.com/huggingface/transformers/pull/3734), the `input_ids` will always be cut to just the last token when using `past` to have a consistent API with other models and allow to keep putting in the whole `input_ids` when speeding up decoding. Also `past` was never supposed to be used together with `input_ids` longer than 1, so I did not think of this edge case :-/. I'm quite surprised that this even worked before, but I'm also not sure whether the `causal mask` in GPT2 was correct for this case before.
I think a simple fix in your case would be to generate the output incrementally and only use `input_ids` of length 1 or not use the `past` variable at all here.
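For illustration, a rough sketch of that incremental pattern with the 2.9-era GPT-2 API (not the reporter's original batched use case):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("I like to", return_tensors="pt")
logits, past = model(input_ids)                       # prime the cache with the whole prefix once
next_token = logits[:, -1].argmax(dim=-1, keepdim=True)

for _ in range(5):                                    # afterwards, feed exactly one new token per step
    input_ids = torch.cat([input_ids, next_token], dim=-1)
    logits, past = model(next_token, past=past)
    next_token = logits[:, -1].argmax(dim=-1, keepdim=True)

print(tokenizer.decode(input_ids[0]))
```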
@LysandreJik, looking at this script, I think I did introduce a breaking change in the PR: https://github.com/huggingface/transformers/pull/3734 for this edge case when:
`past` is given to the model **and** `input_ids` is longer than 1. It's quite an edge case here and I'm not sure whether it's worth reversing the PR. What do you think?
<|||||>Hi @patrickvonplaten - I'm a bit worried about this... I cannot apply the simple fix because I'm unable to get rid of `past` variable or run more inferences at this point because the performance would be affected for my use case which is over a realtime bigdata dataset. The current results are accurate and are based on discussions in https://github.com/huggingface/transformers/issues/3095 - this is already deployed.
Just to be clear... Will this not be fixed?<|||||>That's an interesting use-case indeed. I don't believe we ever intended the `past` to be used with `input_ids` larger than one, and I'm quite surprised as well that it did work.
@thomwolf, do you have any opinion on this?<|||||>I'm really looking forward to knowing if this can be fixed so to continue working as it was working up to v2.8.0. Any new thoughts on this? Thanks<|||||>Hi @Damiox,
Sorry to answer so late. We had some discussion internally about it. And you are correct, we should revert the PR I merged earlier. This is done in #4581 and should be included in the next version. @thomwolf @LysandreJik |
transformers | 4,367 | closed | Fix: unpin flake8 and fix cs errors | 05-14-2020 16:34:14 | 05-14-2020 16:34:14 | ||
transformers | 4,366 | closed | [pipelines] Failing @slow test for TF Summarization | Command:
```bash
RUN_SLOW=1 pytest tests/test_pipelines.py --durations 0 -s -k tf_defaults
```
Issue caused by trying to use Bart in TF.
| 05-14-2020 16:28:10 | 05-14-2020 16:28:10 | |
transformers | 4,365 | closed | Has anyone successfully used TF2.0 to load pre-trained transformer-XL (wt103) weights and reproduce their sota results | # ❓ Questions & Help
## Details
I tried to reproduce the result, but my final perplexity was 35.6 (vs. 18.3 in the paper). I checked carefully and found that the matrix multiplication results of TF 2.x and TF 1.12 (used in the paper) differ by about 0.0001; after many layers of linear transformations and multiplications, the difference in the last layer's output representation reaches about 0.1, which is enough to affect the results. Also, if the mem_len and tgt_len values in the experiment are swapped, the ppl actually drops to 19.5. I can ensure that all loaded parameters are correct; the code follows the author's source code, only changed to the TF 2.0 API. So I want to ask whether anyone has reproduced the results with TF 2.0.
| 05-14-2020 16:08:34 | 05-14-2020 16:08:34 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,364 | closed | Can not reproduce article numbers | # 🐛 Bug
https://github.com/huggingface/transformers/tree/master/examples/summarization/bertabs
## Information
The BertAbs ROUGE-1/2 F1 evaluation numbers that I am getting are much lower than in the article; about half as high!
Model I am using (Bert, XLNet ...):
Bertabs
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Summarization
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 05-14-2020 15:32:44 | 05-14-2020 15:32:44 | These are the numbers I am getting when running the examples; in the article (and here: https://github.com/nlpyang/PreSumm) they report much higher numbers:
****** ROUGE SCORES ******
** ROUGE 1
F1 >> 0.195
Precision >> 0.209
Recall >> 0.186
** ROUGE 2
F1 >> 0.094
Precision >> 0.104
Recall >> 0.089
** ROUGE L
F1 >> 0.221
Precision >> 0.233
Recall >> 0.212<|||||>Same here, I am also getting ROUGE scores in the same ballpark.<|||||>Well, I already switched to the BART model. I do not know what the reason for the low numbers is here, possibly some bugs in the porting, but in general the BertAbs model does not seem to be the best approach for this.<|||||>Alex, by the way, I see that you are currently working on the summarization topic too. If you like, you can write to me directly at [email protected] (I am currently a conversational AI researcher at Nvidia) and we can have a more detailed discussion on this topic.<|||||>Will do, thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,363 | closed | Fix trainer evaluation | Fix eval loss calculation in Trainer for models with `lm_labels` parameter, related to issue #4361
Fix trainer crash at eval time on TPU, related to issue #4362
| 05-14-2020 14:39:30 | 05-14-2020 14:39:30 | Thanks for catching this, @patil-suraj! |
transformers | 4,362 | closed | Trainer crashes on TPU at eval time when prediction_loss_only is True | # 🐛 Bug
The trainer crashes on TPU at eval time when `prediction_loss_only` is `True`. Here's the stack trace
```
ValueError: zero-dimensional arrays cannot be concatenated
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 630, in evaluate
output = self._prediction_loop(eval_dataloader, description="Evaluation")
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 630, in evaluate
output = self._prediction_loop(eval_dataloader, description="Evaluation")
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 707, in _prediction_loop
preds = xm.mesh_reduce("eval_preds", preds, np.concatenate)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 707, in _prediction_loop
preds = xm.mesh_reduce("eval_preds", preds, np.concatenate)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 670, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 670, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
File "<__array_function__ internals>", line 6, in concatenate
File "<__array_function__ internals>", line 6, in concatenate
```
This is because `preds` and `label_ids` are `None` when `prediction_loss_only` is `True`, but this is not checked here: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L706
```
if is_tpu_available():
# tpu-comment: Get all predictions and labels from all worker shards of eval dataset
preds = xm.mesh_reduce("eval_preds", preds, np.concatenate)
label_ids = xm.mesh_reduce("eval_out_label_ids", label_ids, np.concatenate)
```
This only checks whether a TPU is available and doesn't check whether `preds` and `label_ids` are `None`.
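For illustration, here is a minimal sketch of the kind of `None` guard that would avoid the crash. This is only a sketch, not the actual patch, and it assumes a TPU environment where `torch_xla` is installed:
```python
import numpy as np
import torch_xla.core.xla_model as xm  # only available in a TPU environment

def gather_tpu_eval_outputs(preds, label_ids):
    """Sketch of a None-safe gather for Trainer._prediction_loop (not the merged fix)."""
    if preds is not None:
        preds = xm.mesh_reduce("eval_preds", preds, np.concatenate)
    if label_ids is not None:
        label_ids = xm.mesh_reduce("eval_out_label_ids", label_ids, np.concatenate)
    return preds, label_ids
```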
## Information
Model I am using (Bert, XLNet ...): T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Set `prediction_loss_only` only to `True` while initialising `Trainer` or in `.evaluate` method
2. call `trainer.evaluate`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The trainer should not crash at eval time on TPU when `prediction_loss_only` is `True`. `preds`
and `label_ids` should be checked for `None`.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.1, master branch
- Platform: Colab TPU
- Python version: 3.6.9
- PyTorch version (GPU?): '1.6.0a0+96885f7' (false)
- Tensorflow version (GPU?): Not applicable
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes, colab TPU | 05-14-2020 14:30:39 | 05-14-2020 14:30:39 | By the way we're hoping to soon have TPU-specific CI in place, cc @LysandreJik |
transformers | 4,361 | closed | Trainer doesn't calculate eval loss for models with lm_labels parameter | # 🐛 Bug
The trainer doesn't calculate evaluation loss for models with `lm_labels` parameters in the `forward` method. This line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L682 is the source of the bug
`has_labels = any(inputs.get(k) is not None for k in ["labels", "masked_lm_labels"]) `
The loss is only calculated when `has_labels` is `True`, and it is only `True` for models with `labels` and `masked_lm_labels` parameters, not for `lm_labels`.
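For illustration, here is a standalone sketch of the check with `"lm_labels"` added to the list of keys; in the library the real check lives inside `Trainer._prediction_loop`, and the dummy batch below is just an assumption for the example:
```python
# Standalone illustration only; not the exact patch.
inputs = {"input_ids": [[0, 1, 2, 3]], "lm_labels": [[1, 2, 3, 4]]}  # dummy batch

has_labels = any(
    inputs.get(k) is not None for k in ["labels", "lm_labels", "masked_lm_labels"]
)
print(has_labels)  # True once "lm_labels" is part of the checked keys
```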
## Information
Model I am using (Bert, XLNet ...): T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Use any model with `lm_labels` parameter in the `forward` method, ex T5 or BART and evaluate using trainer
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Evaluation loss should be calculated for models with `lm_labels` parameter as well.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.1
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: Python 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101
- Tensorflow version (GPU?): Not applicable
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 05-14-2020 14:30:32 | 05-14-2020 14:30:32 | |
transformers | 4,360 | closed | LayerNorm not excluded from weight decay in TF | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
bert-base-cased
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Add a print statement to `_do_use_weight_decay` in [AdamWeightDecay](https://github.com/huggingface/transformers/blob/master/src/transformers/optimization_tf.py) to see which parameters are actually excluded:
```python
def _do_use_weight_decay(self, param_name):
"""Whether to use L2 weight decay for `param_name`."""
if self.weight_decay_rate == 0:
return False
if self._include_in_weight_decay:
for r in self._include_in_weight_decay:
if re.search(r, param_name) is not None:
return True
if self._exclude_from_weight_decay:
for r in self._exclude_from_weight_decay:
if re.search(r, param_name) is not None:
print(f"Found: {param_name}")
return False
return True
```
2. run `python examples/text-classification/run_tf_glue.py --model_name_or_path bert-base-cased --task_name mrpc --output_dir temp --logging_dir temp --do_train --overwrite_output_dir --optimizer_name adamw`.
3. Observe that no weights related to layer norms are printed.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The weights of the layer norms (and the biases) should be printed.
See for example: https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py.
Since no layer-norm weights are matched by the "layer_norm" pattern, simply switching "layer_norm" to "LayerNorm" seems like the easiest change.
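For reference, a minimal sketch of how the exclusion list could be passed when building the optimizer; the exact parameter-name patterns here are assumptions and depend on how the Keras layers are named in each model:
```python
# Sketch only: exclude LayerNorm weights and biases from weight decay.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=3e-5,
    weight_decay_rate=0.01,
    exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"],
)
```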
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.0
- Platform: Darwin-19.4.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-14-2020 13:25:36 | 05-14-2020 13:25:36 | Thanks for the issue dude! Feel free to open a PR if you have a branch that fixed that lying around haha ;-) |
transformers | 4,359 | closed | Create README.md | 05-14-2020 12:45:48 | 05-14-2020 12:45:48 | Added language metadata for the model to appear on https://huggingface.co/models?filter=turkish |
|
transformers | 4,358 | closed | Trainer and Colab TPU: Training loss isn't declining | # 🐛 Bug
When using the Trainer with a Colab TPU, the training loss seems to remain more or less constant around 1.11 during the whole training process. With a Colab GPU/CPU, however, you get the expected loss decline.
## Information
Model I am using (Bert, XLNet ...):Bert Base Cased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
When you run `run_glue.py` with `xla_spawn.py` as shown in the "Running on TPUs" section of the examples README.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
The problem is reproduced in this Colab notebook:
https://colab.research.google.com/drive/1PLPReml5UOHC3iQdx8QvfCxuhDekTD2l?usp=sharing
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The training loss declines the same way it does when using GPU/CPU.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.1
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0a0+96885f7 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-14-2020 12:32:04 | 05-14-2020 12:32:04 | I meeting the same issue with a custom training script (built using GLUE script as model), using ELECTRA for classification task.<|||||>Hi! This should have been solved by https://github.com/huggingface/transformers/pull/4450.
Could you install from source and let me know if it fixes your issue?<|||||>The code is running, but I receive this warning :
```
UserWarning: This overload of addcdiv_ is deprecated:
addcdiv_(Number value, Tensor tensor1, Tensor tensor2)
Consider using one of the following signatures instead:
addcdiv_(Tensor tensor1, Tensor tensor2, *, Number value) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:760.)
p.data.addcdiv_(-step_size, exp_avg, denom)
```
and it's **very slow** (2h for 1 epoch).
On a K80 GPU, it takes 30 minutes...
---
Running on 8 cores is not working, but I think this is due to the RAM limitations.<|||||>Can confirm that the training loss is now declining as expected in the colab notebook and that one epoch takes around 2h with 1 TPU core. I also tested with a K80, but unlike @Colanim it took around 5h per epoch, which I find reasonable in the sense that the TPU is faster. The batch size was the default Trainer value for both tests (which is 5 I believe).
I get @Colanim's UserWarning regardless of using GPU or TPU.
So imo the bug is fixed! |
transformers | 4,357 | closed | Create README.md | 05-14-2020 10:47:46 | 05-14-2020 10:47:46 | Great! I suspect Exbert won't work out of the box but we are looking into enabling it automatically |
|
transformers | 4,356 | closed | GPT2 checkpoint breaks on new transformers version (2.9.1) | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2
Language I am using the model on (English, Chinese ...): SMILES
## Description
When trying to load a gpt2 checkpoint from before 2.9.1, it loads fine.
Checked on 2.8.0 and it works fine.
Then on 2.9.1 I can't load the model and I get this error:
```
RuntimeError: Error(s) in loading state_dict for LMWrapper:
Missing key(s) in state_dict: "model.transformer.h.0.attn.masked_bias", "model.transformer.h.1.attn.masked_bias", "model.transformer.h.2.attn.masked_bias", "model.transformer.h.3.attn.masked_bias", "model.transformer.h.4.attn.masked_bias", "model.transformer.h.5.attn.masked_bias".
```
LMWrapper is just a wrapper around the GPT2 Model, which is named "model" in self
- `transformers` version: 2.9.1
- Platform: Linux and Mac
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0 GPU and CPU
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes and No
- Using distributed or parallel set-up in script?: No
| 05-14-2020 10:24:44 | 05-14-2020 10:24:44 | Do you mind giving us a reproducible code example so that we may check on our end?<|||||>I have the same issue, my trained models work on 2.8.0 but I get the same error when loading saved model on 2.9.1<|||||>Hi, sorry for not getting back to this @LysandreJik .
It seems that in Transformers 2.8.0 and below, there is no parameter called `masked_bias`.
However in 2.9+, there is now a parameter called `masked_bias` on line 109 in `modeling_gpt2.py`: https://github.com/huggingface/transformers/blob/v2.9.0/src/transformers/modeling_gpt2.py#L109
Thus, the error above is seen.
Could anything be done about this? Otherwise it renders all GPT2 checkpoints before 2.8.0 unusable, unless manually tinkering with the checkpoint to add this parameter?<|||||>Hi! I'm trying to reproduce, but I don't get an error like you do. These values indeed do not get loaded, but this does not raise a `RuntimeError`.
Do you mind giving me a short code sample so that I may debug on my side? Thanks a lot.<|||||>The weights are all proprietary unfortunately but let me try and do something.<|||||>@LysandreJik
First, `pip install transformers==2.8.0`
Then do:
```
import torch
from transformers import GPT2Config, GPT2Model
torch.save(GPT2Model(GPT2Config(n_layer=2)).state_dict(), "gpt2_state_dict.pt")
```
Then, `pip install transformers==2.9.0`
and do
```
import torch
from transformers import GPT2Config, GPT2Model
gpt2 = GPT2Model(GPT2Config(n_layer=2))
state_dict = torch.load("gpt2_state_dict.pt")
gpt2.load_state_dict(state_dict)
```
And you get an error:
```
RuntimeError: Error(s) in loading state_dict for GPT2Model:
Missing key(s) in state_dict: "h.0.attn.masked_bias", "h.1.attn.masked_bias".
```<|||||>I see! Indeed, there's not much we can do about that, unfortunately. We have our own loading/saving method for that purpose.
Out of curiosity, why don't you use `from_pretrained`/`save_pretrained`?<|||||>I'm wrapping the model in a Pytorch Lightning module, and the most convenient way to save that is to save the state dictionary.<|||||>In that case, you could have a workaround like:
```py
import torch
from transformers import GPT2Config, GPT2Model
# Use the from_pretrained method
temp = GPT2Model.from_pretrained(directory)
temp.save_pretrained(directory)
gpt2 = GPT2Model(GPT2Config(n_layer=2))
state_dict = torch.load(f"{directory}/pytorch_model.bin")
gpt2.load_state_dict(state_dict)
```
Loading/saving using `from_pretrained` shouldn't raise an error, and once it's saved you can then load the state dict like you did. Once you have saved it using this, you can always just load the state dict.
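Another possible workaround for a state-dict-based setup is a non-strict load, so the newly registered `masked_bias` buffers (which are constants) simply keep their freshly initialized values. This is only a sketch, not an official recommendation:
```python
import torch
from transformers import GPT2Config, GPT2Model

gpt2 = GPT2Model(GPT2Config(n_layer=2))
state_dict = torch.load("gpt2_state_dict.pt")
# strict=False skips the masked_bias buffers missing from pre-2.9 checkpoints
missing, unexpected = gpt2.load_state_dict(state_dict, strict=False)
print(missing)  # should only list the masked_bias buffers
```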
<|||||>@LysandreJik yes, that seems like a good workaround. Thanks!<|||||>Glad I could help, sorry for taking so long to get back to you. Feel free to reopen if you face any other issues down the road. |
transformers | 4,355 | closed | DistilGPT2 Finetuning with GPU | Hi, i'm a beginner in this field and would like to finetune the model DistilGPT2 using my laptop. I have an Nvidia 1660Ti and followed the guide example
```
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export TEST_FILE=/path/to/dataset/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
```
But during the finetuning my GPU stays at 0% and all the work is done by the CPU. Is there a way to force finetuning on the GPU? | 05-14-2020 09:22:21 | 05-14-2020 09:22:21 | It should be automatic.
What does torch.cuda.is_available say?<|||||>torch.cuda.is_available returns False.
This is my setup, from running nvidia-smi:
```
Thu May 14 15:04:38 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 431.87 Driver Version: 431.87 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 166... WDDM | 00000000:01:00.0 Off | N/A |
| N/A 57C P8 3W / N/A | 153MiB / 6144MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```<|||||>Yes just checked. torch is not able to find cuda<|||||>You should upgrade your nvidia driver.
You probably installed PyTorch 1.5.0, which bumped up the required version of the NVIDIA driver.<|||||>yes.
`pip install torch==1.4.0`
will solve the issue. |
transformers | 4,354 | closed | Not getting expected results for freshly-pretrained BERT | # ❓ Questions & Help
I have a freshly trained BERT (using the original BERT) which doesn't seem to function as expected after moving it to PyTorch/Transformers. I don't have any syntax errors anywhere. So I am not sure how to go about understanding the bug. Wondering if anyone has suggestions for me.
Below I am providing details on every single step of my training process.
Here is a comment from my training:
```
2020-05-14 08:36:41,144 : Saving checkpoints for 1875000 into gs:// .... /perbert_L-12_H-768_A-12/model.ckpt.
2020-05-14 08:37:07,703 : loss = 1.7460283, step = 1875000 (1664.300 sec)
2020-05-14 08:37:07,704 : global_step/sec: 15.0213
2020-05-14 08:37:07,704 : examples/sec: 1922.73
2020-05-14 08:37:07,705 : Enqueue next (25000) batch(es) of data to infeed.
2020-05-14 08:37:07,705 : Dequeue next (25000) batch(es) of data from outfeed.
```
As you can see, the loss function is pretty low after being trained for about 180k steps.
Now I had it converted with `convert_tf_checkpoint_to_pytorch`:
```python
from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch
model = "perbert_L-12_H-768_A-12"
checkpoint = "1875000"
convert_tf_checkpoint_to_pytorch(
f'/models/{model}/model.ckpt-{checkpoint}.index',
bert_config_file=f'models/{model}/bert_config.json',
pytorch_dump_path=f"models/{model}/saved_model_{checkpoint}")
```
Then I load the results in masked language model to see whether it does something reasonable or not:
```python
import torch
from transformers import BertConfig, BertTokenizer, BertForMaskedLM
dir = "/Users/danielk/ideaProjects/farsi-language-models/src/models/perbert_L-12_H-768_A-12/"
checkpoint = 1875000
tokenizer = BertTokenizer.from_pretrained(dir)
config = BertConfig.from_json_file(dir + '/bert_config.json')
model = BertForMaskedLM.from_pretrained(dir + '/saved_model_' + str(checkpoint), config=config)
string = "بزرگترین اقتصاد دنیا در حال فروپاشی موقت است. این امر مساله ای کاملا طبیعی است"
tokens = tokenizer.tokenize(string)
print("tokens: " + tokens)
print("indices: " + tokenizer.encode(string))
input_ids = torch.tensor([tokenizer.encode(string)])
outputs = model(input_ids)
predictions = outputs[0]
for masked_index, _ in enumerate(tokens):
print(" * input string: " + tokens[masked_index])
predictions_numpy = predictions[0, masked_index].cpu().data.numpy()
predictions_with_index = list(enumerate(predictions_numpy))
predictions_with_index = sorted(predictions_with_index, key=lambda x: -x[1])
top_indices = [x for x in predictions_with_index[:10]]
predicted_tokens = tokenizer.convert_ids_to_tokens([x[0] for x in top_indices])
print(predicted_tokens)
print("...")
```
The output of the tokenization is correct; the tokens are reasonable and they're mapped to the right indices:
```
tokens: ['بزرگترین', 'اقتصاد', 'دنیا', 'در', 'حال', 'فروپاشی', 'موقت', 'است', '[UNK]', 'این', 'امر', 'مساله', 'ای', 'کاملا', 'طبیعی', 'است']
indices: [2, 989, 499, 655, 6, 166, 16459, 3756, 13, 1, 11, 1312, 3703, 36, 1089, 1070, 13, 3]
```
However, the output of the masking is not as expected:
```
* input string: بزرگترین
['Plugin', '##AN', '##AZ', 'ورزشکاران', 'خانوارها', 'items', 'Moha', 'بکنید', '##Style', 'سهیلی']
...
* input string: اقتصاد
['استان', 'آنها', 'نماد', 'বন', 'ویتنام', 'محكوم', '##جاری', '##sc', 'public', 'آلبرت']
...
* input string: دنیا
['##Lang', 'public', 'آی', 'یافتهاند', 'check', 'یادتون', 'ضریب', 'رصدخانه', 'تزئینات', 'رف']
...
* input string: در
['##Lang', 'Plugin', 'বন', '##BX', 'گرن', 'مغز', 'منجمد', 'فضل', 'سرمایشی', 'public']
...
* input string: حال
['##Lang', 's', 'تمهیدات', 'خخخ', 'Product', 'حلزون', '##ten', '##تیپ', '##sc', 'آی']
...
* input string: فروپاشی
['##Pack', 'বন', 'باشد', 'عجایب', '##File', '##Lang', '##column', 's', '##BG', '334']
...
* input string: موقت
['بن', 'عاشورایی', 'تایباد', 'Vivo', 'بلوغ', '##dani', 'اشتیاق', '1351', '[UNUSED_194]', '##أفادت']
...
* input string: است
['85', 'مشار', '##گرایان', '##جاری', 'while', '##cause', 'ژیل', 'مولد', 'نقلیه', 'رحم']
...
* input string: [UNK]
['آنها', 'چسبندگی', 'سرمایشی', '##یس', 'کلی', 'آی', 'آزمایشی', '##گ', 'مولد', 'سینماهای']
...
* input string: این
['اسپانیایی', '##زآپ', 'سهیلی', 'رسیدند', 'استان', 'بن', 'کریر', '##پد', 'تایباد', '##ث']
...
* input string: امر
['ش', 'বন', '##آيند', 'اعتباری', 'نامرئی', '##ملی', 'public', 'اومد', 'یافتهاند', 'موعظه']
...
* input string: مساله
['إيران', 'বন', 'public', '##سمح', 'wp', 'نماد', 'شئون', 'website', '##تیپ', '##اعتراض']
...
* input string: ای
['مردمان', '[UNUSED_203]', 'خورده', 'public', 'لهم', 'آنها', 'کعبه', 'الهی', 'رصدخانه', 'سرمایشی']
...
* input string: کاملا
['public', '77', 'مروج', 'تایباد', '##aghi', 'منصب', '##Lang', '[UNUSED_203]', '##198', 'مغز']
...
* input string: طبیعی
['##Lang', 'Plugin', '##den', '##مفید', 'وزنه', 'دلهره', 'check', 'Vivo', 'Biz', 'خواهان']
...
* input string: است
['ju', '##Lang', 'عقربه', 'استان', 'اثرانگشت', '##aghi', 'انزلی', 'چسبندگی', '##tis', 'فرماید']
...
```
The above output that the masked language model has returned is quite random. Clearly, something is off.
I briefly inspected one of my tfrecord files, but they also seem to be reasonable; here is one example block:
```json
{
"features": {
"feature": {
"masked_lm_positions": {
"int64List": {
"value": [
"2",
"3",
"5",
"15",
"23",
"27",
"33",
"41",
"55",
"60",
"72",
"74",
"87",
"97",
"103",
"104",
"106",
"112",
"0",
"0"
]
}
},
"masked_lm_ids": {
"int64List": {
"value": [
"5601",
"33",
"4125",
"12",
"6",
"11635",
"9808",
"178",
"8",
"287",
"6",
"3213",
"6",
"23",
"8",
"814",
"613",
"12466",
"0",
"0"
]
}
},
"masked_lm_weights": {
"floatList": {
"value": [
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
0.0,
0.0
]
}
},
"segment_ids": {
"int64List": {
"value": [
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"0",
"0",
"0",
"0",
"0",
"0"
]
}
},
"input_ids": {
"int64List": {
"value": [
"2",
"2824",
"4",
"4",
"6",
"4",
"599",
"5388",
"16",
"16598",
"4513",
"5",
"2824",
"2219",
"217",
"5891",
"7",
"3117",
"4710",
"24069",
"16",
"69",
"5",
"4",
"127",
"166",
"3386",
"4",
"178",
"13",
"1",
"278",
"26085",
"5791",
"5",
"1401",
"1801",
"5",
"2348",
"127",
"905",
"4",
"13",
"1",
"2824",
"1127",
"8447",
"7144",
"18646",
"5",
"168",
"14763",
"9",
"6",
"7103",
"4",
"3878",
"1978",
"5",
"430",
"287",
"1",
"3",
"12466",
"23",
"814",
"6",
"14259",
"6",
"4426",
"46",
"453",
"4",
"707",
"4",
"46",
"6",
"4426",
"365",
"338",
"7935",
"9",
"6",
"748",
"6",
"4426",
"44",
"4",
"476",
"1",
"1",
"7",
"12466",
"10",
"12053",
"1070",
"6",
"4",
"1832",
"44",
"8012",
"7968",
"46",
"4",
"4",
"49",
"4",
"287",
"14",
"6",
"14259",
"707",
"4",
"1",
"1348",
"44",
"6849",
"62",
"6931",
"31",
"1",
"3",
"0",
"0",
"0",
"0",
"0",
"0"
]
}
},
"input_mask": {
"int64List": {
"value": [
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"0",
"0",
"0",
"0",
"0",
"0"
]
}
},
"next_sentence_labels": {
"int64List": {
"value": [
"1"
]
}
}
}
}
}
```
Here are some thoughts:
- Could the model conversion somehow go wrong without raising any errors?
- Persian is a right-to-left language; could that be a factor? In theory, it should not be an issue, since from the perspective of the either code (TF and PyTorch) these are just strings/byres sequenced after each other.
| 05-14-2020 09:02:34 | 05-14-2020 09:02:34 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,353 | closed | Fixing tokenization of extra_id symbols in T5Tokenizer | Fixing tokenization of extra_id symbols in T5Tokenizer. Related to issue #4021 | 05-14-2020 02:01:36 | 05-14-2020 02:01:36 | Awesome this looks good to me!
@LysandreJik - I tried this fix and it does fix the issue https://github.com/huggingface/transformers/issues/4021 . To me it seems like this line was just forgotten when implementing T5 tokenizer.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=h1) Report
> Merging [#4353](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/94cb73c2d2efeb188b522ff352f98b15124ba9f8&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4353 +/- ##
=======================================
Coverage 78.21% 78.21%
=======================================
Files 120 120
Lines 20038 20039 +1
=======================================
+ Hits 15673 15674 +1
Misses 4365 4365
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.49% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.08% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=footer). Last update [94cb73c...c9824f4](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi,
Is the below expected behavior? I am initializing the tokenizer with `tokenizer = T5Tokenizer.from_pretrained('t5-large', extra_ids=0, additional_special_tokens=special_tokens)` where special_tokens is a list like ['_self_say', '_partner_say', '_self_persona', '_partner_persona']. Despite specifying extra_ids=0 I see a lot of sentinel tokens in the outputs with negative ids. For example:
```
Input: _context _setting_name Tower _setting_desc The inside tower is made from a combination of wood and brick. It also has some metal to hold all the doors in place. _object a door _object a door _self_name pet dog _self_persona I am mans best friend and I wouldn't have it any other way. I tend to my master and never leave his side. I sleep at his feet and guard the room at night from things that go bump in the night. _option hug knight _option hit knight _history
Response: <extra_id_-100>,<extra_id_-99> erreichen<extra_id_-98> hit knightăminterească a door<extra_id_-97>îî<extra_id_-96> a doorăm!<extra_id_-95> timpul<extra_id_-94> pentru<extra_id_-93>itățile Tower<extra_id_-92> I am Mans best friend.<extra_id_-91> doud<extra_id_-90> casă facem<extra_id_-89> a doorșteștehlen<extra_id_-88><extra_id_-87> tower is made from different materials. The outside
```
|
transformers | 4,352 | closed | Longformer | ## Adding the Longformer model
Paper: https://arxiv.org/abs/2004.05150
The code is mostly copied from https://github.com/allenai/longformer
Addressing issue: https://github.com/huggingface/transformers/issues/3783
**TODOs**
- [x] Add unit tests
- [x] Address PR reviews
- [x] Upload model to HF instead of AllenAI s3 buckets.
**TODOs for other PRs**
- Add `LongformerForSequenceClassification`, `LongformerForMultipleChoice`, `LongformerForTokenClassification`, `LongformerForQuestionAnswering` with automatic setting of global attention based on the task.
- Add `LongformerEncoderDecoder`
- Add an example to show how to convert existing pretrained models into their long version. | 05-14-2020 00:46:56 | 05-14-2020 00:46:56 | This is awesome @ibeltagy. I'm very excited about Longformer.
In the original implementation, a good part of the speedup came from the custom TVM kernel. In this PR, is that speedup lost, or was it adapted somehow?<|||||>Thanks @ibeltagy!
Is it possible for you or someone else who is familiar with TF2 to also submit a PR for TF2? I tried to migrate your code in the past, but I couldn't migrate the new attention mechanism written for TVM, so I moved on. Your work, alongside Reformer, is of special interest for people like me and my colleagues, who usually do research with long documents (>=1000 words).
Maybe @thomwolf and @julien-c know the right person...
<|||||>@ibeltagy - Looks awesome the PR :-) I added some minor comments.
I think we can merge this very soon :-)
To pass the check_code_qualtiy test, you can run `make style` from the root folder.
Regarding the unit test, I think you can copy-paste a lot of the code from `test_roberta_modeling.py`. In order to skip tests for `torch_script, pruning, ..` (features we can add in another PR) you can do the same as is done in the reformer tests here: https://github.com/huggingface/transformers/blob/94cb73c2d2efeb188b522ff352f98b15124ba9f8/tests/test_modeling_reformer.py#L510 .
If possible it would also be great to add some integration tests comparing the outputs (and maybe even gradients) to those of your original model code. I think you could use the same integration test structure as was done for Roberta or Reformer.<|||||>@bratao , @iliaschalkidis, about the custom CUDA kernel. We [recently](https://github.com/allenai/longformer/pull/27/files#diff-04c6e90faac2675aa89e2176d2eec7d8R4) added an efficient PyTorch implementation of our sliding window attention (without dilation), which is faster and uses less memory (with fp16) and should be easy to port to TF. The TVM code is still needed for the dilated sliding window attention, but dilation is more important for char-lm than fine-tuning on downstream tasks (QA, classification .. etc). This PR only uses the PyTorch code.<|||||>Thanks, @patrickvonplaten. I will address your comments and let you know.
I also want to add `LongformerForSequenceClassification`, `LongformerForMultipleChoice`, `LongformerForTokenClassification`, `LongformerForQuestionAnswering` with automatic setting of global attention based on the task, so that the user doesn't need to worry about it. I will keep this to a different PR unless you think it is better to add it here. <|||||>> Thanks, @patrickvonplaten. I will address your comments and let you know.
>
> I also want to add `LongformerForSequenceClassification`, `LongformerForMultipleChoice`, `LongformerForTokenClassification`, `LongformerForQuestionAnswering` with automatic setting of global attention based on the task, so that the user doesn't need to worry about it. I will keep this to a different PR unless you think it is better to add it here.
It'd be great to add those as well :-) I think doing it in a second PR would be better as well<|||||>@patrickvonplaten, @sshleifer, I am done addressing your reviews. I still need to upload model weights but waiting to get access to the AllenAI organization account. Also, can you check the failed tests? they seem like issues with CircleCI rather than the code.
<|||||>@sshleifer, addressed your comments. Now `ci/circleci: run_tests_torch_and_tf` is timing out.<|||||>I'm sorry.
Rebase may help the timeout, but I agree that it does not appear to be caused by this code.
I'd just verify that there aren't new tests that should be decorated `@slow` (append `--durations 0` to your local pytest command) and it should be fine.<|||||>The slowest one is 0.10s.
```
0.10s call tests/test_modeling_longformer.py::LongformerModelTest::test_save_load
0.10s call tests/test_modeling_longformer.py::LongformerModelTest::test_attention_outputs
0.07s call tests/test_modeling_longformer.py::LongformerModelTest::test_resize_tokens_embeddings
0.06s call tests/test_modeling_longformer.py::LongformerModelTest::test_determinism
0.05s call tests/test_modeling_longformer.py::LongformerModelTest::test_inputs_embeds
0.04s call tests/test_modeling_longformer.py::LongformerModelTest::test_correct_missing_keys
0.04s call tests/test_modeling_longformer.py::LongformerModelTest::test_hidden_states_output
0.04s call tests/test_modeling_longformer.py::LongformerModelTest::test_longformer_model
0.03s call tests/test_modeling_longformer.py::LongformerModelTest::test_initialization
0.03s call tests/test_modeling_longformer.py::LongformerModelTest::test_longformer_for_masked_lm
0.02s call tests/test_modeling_longformer.py::LongformerModelTest::test_model_common_attributes
```
Maybe the integration tests are slower than average because of the long document? <|||||>@patrickvonplaten, @sshleifer, looks like rebasing fixed the problem. Thanks.<|||||>> ## Adding the Longformer model
> Paper: https://arxiv.org/abs/2004.05150
> The code is mostly copied from https://github.com/allenai/longformer
> Addressing issue: #3783
>
> **TODOs**
>
> * [x] Add unit tests
> * [x] Address PR reviews
> * [ ] Upload model to HF instead of AllenAI s3 buckets.
>
> **TODOs for other PRs**
>
> * Add `LongformerForSequenceClassification`, `LongformerForMultipleChoice`, `LongformerForTokenClassification`, `LongformerForQuestionAnswering` with automatic setting of global attention based on the task.
> * Add `LongformerEncoderDecoder`
> * A few small TODOs in the code
> * Add an example to show how to convert existing pretrained models into their long version.
Is there any particular timeframe by when we can have LongformerForQuestionAnswering? Would love to use it in a small project of mine.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@18d233d`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `84.48%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4352 +/- ##
=========================================
Coverage ? 78.30%
=========================================
Files ? 123
Lines ? 20373
Branches ? 0
=========================================
Hits ? 15953
Misses ? 4420
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `82.75% <82.75%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `92.68% <100.00%> (ø)` | |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.57% <100.00%> (ø)` | |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.61% <100.00%> (ø)` | |
| [src/transformers/tokenization\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbG9uZ2Zvcm1lci5weQ==) | `100.00% <100.00%> (ø)` | |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.72% <0.00%> (ø)` | |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.47% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.54% <0.00%> (ø)` | |
| ... and [120 more](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=footer). Last update [18d233d...d1c4bbc](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,351 | closed | tf add resize_token_embeddings method | Introduces the `resize_token_embeddings` method for TF models (with the exception of `Transformer-XL`).
My first commit! Thank you for the awesome library.
I have often found myself needing to extend the embedding vectors to accommodate auxiliary data, and it seems that TF models don't have this functionality. The size of the embeddings layer can be changed as follows:
```
import numpy as np
from transformers import TFT5ForConditionalGeneration, TFBertForSequenceClassification

SIZE = 50000
# T5 Generator
MODEL_NAME = 't5-small'
model2 = TFT5ForConditionalGeneration.from_pretrained(MODEL_NAME)
emb1 = model2.get_input_embeddings()
emb2 = model2.resize_token_embeddings(SIZE)
assert emb2.weight.shape[0] == SIZE
np.allclose(emb1.weight[:10].numpy(), emb2.weight[:10].numpy())
# BERT
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
emb1 = model.get_input_embeddings()
emb2 = model.resize_token_embeddings(SIZE)
assert emb2.word_embeddings.shape[0] == SIZE
np.allclose(emb1.word_embeddings[:10].numpy(), emb2.word_embeddings[:10].numpy())
```
- [x] BERT
- [x] GPT
- [x] GPT-2
- [ ] Transformer-XL
- [x] XLNet
- [x] XLM
- [x] RoBERTa
- [x] DistilBERT
- [x] CTRL
- [x] CamemBERT
- [x] ALBERT
- [x] T5
- [x] XLM-RoBERTa
- [x] FlauBERT
- [x] ELECTRA
| 05-14-2020 00:30:32 | 05-14-2020 00:30:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=h1) Report
> Merging [#4351](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e8db8b845a971b0cf63a0896b9deb5b316028a8b&el=desc) will **increase** coverage by `0.05%`.
> The diff coverage is `42.50%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4351 +/- ##
==========================================
+ Coverage 76.80% 76.86% +0.05%
==========================================
Files 128 128
Lines 21602 21669 +67
==========================================
+ Hits 16591 16655 +64
- Misses 5011 5014 +3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.48% <ø> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.26% <13.33%> (-2.46%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.61% <33.33%> (-0.78%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `82.43% <33.33%> (-0.45%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `90.33% <33.33%> (-0.96%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.68% <33.33%> (-0.64%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.87% <33.33%> (-0.71%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.54% <33.33%> (-0.31%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `80.03% <33.33%> (-0.27%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.11% <50.00%> (-0.23%)` | :arrow_down: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=footer). Last update [e8db8b8...8dc8f96](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>hi team- i know you guys are busy. do you mind taking a look when you have a chance & let me know w/ any comments? thanks!<|||||>Hi- Thanks for the feedback. Sure, I can.
Sorry context switching is difficult at times :) I remember that `TFBertMainLayer` implementation was slightly different because it does not inherit from `TFPreTrainedModel` as T5 does. So it requires its own `_get_resized_embeddings` and test modules. But to the extent other architectures are similar to `T5`, it shouldn't be a problem.
Is there a source of truth for all the TF architectures implemented? Is https://github.com/huggingface/transformers#model-architectures kept up to date?
Deniz
<|||||>Yes, that list is kept up to date!<|||||>@LysandreJik please take a look. thanks!<|||||>This looks good! Pinging @jplu for review as he's the TF master around here.<|||||>Thank you @jplu @LysandreJik ! Looking forward to contributing more. |
transformers | 4,350 | closed | RobertaForSequenceClassification for BERT | # ❓ Questions & Help
Could you please explain what the consequences are of using a RobertaForSequenceClassification head with a BERT pre-trained model? Will it have the same effect as using BertForSequenceClassification? | 05-13-2020 23:59:46 | 05-13-2020 23:59:46 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,349 | closed | Ability to specify directory of tensorboard logs in BART finetuning example | # 🚀 Feature request
Using [this script](https://github.com/huggingface/transformers/blob/839bfaedb21e42edee093b9e21e2c2f1ea7514f0/examples/summarization/bart/finetune.py) to finetune BART, it'd be useful if we could specify the directory to store the tensorboard logs. Right now, it looks like the logs are saved to `lightning_logs/version_X`.
Ideally, this is the command I would run:
```{bash}
python finetune.py \
--data_dir=./cnn-dailymail/cnn_dm \
--model_name_or_path=bart-large \
--learning_rate=3e-5 \
--train_batch_size=4 \
--eval_batch_size=4 \
--output_dir=$OUTPUT_DIR \
--do_train \
--logging_dir logs $@
```
## Motivation
Being able to specify the directory of your tensorboard logs makes it easier to compare multiple experiments.
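For reference, a minimal sketch of how this could be wired up, assuming the script keeps using pytorch-lightning's `TensorBoardLogger`; the `--logging_dir` flag and its default are assumptions, not an existing argument of `finetune.py`:
```python
# Sketch only: --logging_dir is a hypothetical flag, not currently in finetune.py.
import argparse
import pytorch_lightning as pl

parser = argparse.ArgumentParser()
parser.add_argument("--logging_dir", type=str, default="lightning_logs",
                    help="Where to write TensorBoard logs")
args, _ = parser.parse_known_args()

logger = pl.loggers.TensorBoardLogger(save_dir=args.logging_dir)
trainer = pl.Trainer(logger=logger)
```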
| 05-13-2020 23:58:26 | 05-13-2020 23:58:26 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,348 | closed | Add link to W&B to see whole training logs | 05-13-2020 22:59:43 | 05-13-2020 22:59:43 | love it, thanks Manuel! 👍 <|||||>You are welcome. From now on I will add it to all my models. |
|
transformers | 4,347 | closed | TracerWarning on modeling_gpt2.py:147 when using Torchscript | The following script:
```
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained('gpt2-medium')
model.eval()
example = torch.randn(100, 50).abs().mul(1000).type(torch.int64)
traced_model = torch.jit.trace(model, example)
```
It's returning the following warning:
```
python3.7/site-packages/transformers/modeling_gpt2.py:147: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = self.bias[:, :, ns - nd : ns, :ns]
```
Is this something to worry about? I see there was a similar issue with sqrt in https://github.com/huggingface/transformers/issues/3954
Additionally, if I wanted to use TorchScript with `past`, I should trace the model in such a way that `past` is recorded in the trace. Is that correct?
- `transformers` version: 2.9.0
- Platform: macos
- Python version: 3.7
- PyTorch version: 1.5.0 cpu
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 05-13-2020 21:42:17 | 05-13-2020 21:42:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,346 | closed | Discrepancy in the generation of T5 here vs the original code | # ❓ Questions & Help
We have [several models](https://github.com/allenai/unifiedqa) trained with the original T5 code. Now, trying the models in your code, we see some discrepancies. For example, on the input "Which is best conductor? \n (A) iron (B) feather (C) wood (D) plastic", [the original code](https://unifiedqa.apps.allenai.org/) produces "iron", while the Transformers code produces: "feather (C) feather (C) feather (C) feather (C) feather (C"
## Details
Here's what I've done:
First I downloaded a subset of the model directory, as I only need some files:
```
$ ls -l
total 316048
-rw-r--r-- 1 tafjord staff 387 May 12 18:51 checkpoint
-rw-r--r-- 1 tafjord staff 8 May 12 18:01 model.ckpt-1100500.data-00000-of-00002
-rw-r--r-- 1 tafjord staff 121752064 May 12 18:01 model.ckpt-1100500.data-00001-of-00002
-rw-r--r-- 1 tafjord staff 5677 May 12 18:01 model.ckpt-1100500.index
-rw-r--r-- 1 tafjord staff 26892327 May 12 18:02 model.ckpt-1100500.meta
```
Then edit the checkpoint file so it refers to the right checkpoint in the first line:
```
$ cat checkpoint
model_checkpoint_path: "model.ckpt-1100500"
all_model_checkpoint_paths: "model.ckpt-1000000"
all_model_checkpoint_paths: "model.ckpt-1020100"
...
```
Now, I can run this with Transformers 2.9:
```python
from transformers import T5Config, T5Tokenizer, T5ForConditionalGeneration
from transformers.modeling_t5 import load_tf_weights_in_t5
base_model = "t5-small"
tokenizer = T5Tokenizer.from_pretrained(base_model)
model = T5ForConditionalGeneration(T5Config.from_pretrained(base_model))
load_tf_weights_in_t5(model, None, "/Users/tafjord/models/t5/unifiedqa-small/")
model.eval()
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
return [tokenizer.decode(x) for x in res]
```
This agrees with the output of T5 code:
```python
run_model("Which is best conductor? \n (A) iron (B) feather")
```
which gives: `['iron']`
```python
run_model("Scott filled a tray with juice and put it in a freezer. The next day, Scott opened the freezer. How did the juice most likely change? \n (A) It condensed. (B) It evaporated. (C) It became a gas. (D) It became a solid.")
```
which produces: `['it condensed.']`. The original T5 code produces 'It condensed." (with a capital letter).
This doesn't work at all:
```python
run_model("Which is best conductor? \n (A) iron (B) feather (C) wood (D) plastic")
```
which produces: `['feather (C) feather (C) feather (C) feather (C) feather (C']` (the T5 code says "iron")
So maybe I'm missing some weights or some preprocessing; I haven't dug into that yet. Also, some simple beam-search settings are not helping here:
```python
run_model("Which is best conductor? \n (A) iron (B) feather (C) wood (D) plastic",
temperature=0.7, num_return_sequences=4, num_beams=20)
```
which produces:
```
['feather (C) feather (C) feather (C) feather (C) feather (C',
'feather (C) feather (C) feather (C) feather (C) feather',
'feather (C) feather (C) feather (C) feather',
'feather (C) feather (C) feather']
```
FYI @OyvindTafjord | 05-13-2020 19:23:12 | 05-13-2020 19:23:12 | I think you should add a prefix, such as `"question: Which is the best conductor"` for hugging face code. Can you try that and see whether the results improve? |
transformers | 4,345 | closed | Add image and metadata | Unfortunately i accidentally orphaned my other PR | 05-13-2020 18:00:11 | 05-13-2020 18:00:11 | [model page](https://huggingface.co/ViktorAlm/electra-base-norwegian-uncased-discriminator) |
transformers | 4,344 | closed | Added the feature to provide a generation prompt for encoder-decoder models. | Added the `decoder_input_ids` argument to `.generate(...)`, which should be used (only) with encoder-decoder models to start generation from some sequence. | 05-13-2020 16:45:08 | 05-13-2020 16:45:08 | The code quality check complains about lines being too long in modeling_utils.py.
There are several unchanged places where this is already the case. Should I ignore this?<|||||>Also adding @yjernite since we talked about something similar<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>will re-open after sampling re-factor |
transformers | 4,343 | closed | [docs] XLNetLMHeadModel example in documentation does not produce the right probabilities | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLNet
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ X ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X ] my own task or dataset: (give details below)
## To reproduce
Pasting in the example given at https://huggingface.co/transformers/model_doc/xlnet.html#transformers.XLNetLMHeadModel.forward but changing the words, I get nonsensical probabilities:
```{python}
from transformers import XLNetTokenizer, XLNetLMHeadModel
import torch
tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-large-cased')
# The same way can the XLNetLMHeadModel be used to be trained by standard auto-regressive language modeling.
input_ids = torch.tensor(tokenizer.encode("Hit the nail on the <mask>", add_special_tokens=False)).unsqueeze(0) # We will predict the masked token
labels = torch.tensor(tokenizer.encode("head", add_special_tokens=False)).unsqueeze(0)
assert labels.shape[0] == 1, 'only one word will be predicted'
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0 # Previous tokens don't see last token as is done in standard auto-regressive lm training
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float) # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0 # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping, labels=labels)
loss, next_token_logits = outputs[:2] # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
```
I get loss equal to `5.9626` which means the probability P(head | Hit the nail on the __) = 0.003, much too low. Also, looking at `next_token_logits`, the maximum probability word appears to be nonsensical:
```
In [137]: tokenizer.decode([next_token_logits.argmax()])
Out[137]: 'the'
```
So using this code, the most likely word after "Hit the nail on the" would appear to be "the". I don't think these are the real model probabilities, because if so then text generation would be extremely off. More likely the code is not accessing the right probabilities.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
True model probabilities for words in context, and in particular the test case:
P(nail | Hit the nail on the ___) = a high number
P(the | Hit the nail on the ___) = a lower number
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux-4.15.0-88-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-13-2020 16:45:00 | 05-13-2020 16:45:00 | `XLNet` usually requires some padding to work well, see https://huggingface.co/transformers/usage.html#text-generation.
You could try to use padding as shown here: https://huggingface.co/transformers/usage.html#text-generation and see whether the problem persists?<|||||>This resolves the problem. |
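For reference, a minimal sketch of the padding trick suggested in the comment above, reusing the masking setup from the original snippet (the `PADDING_TEXT` string is an arbitrary choice following the linked usage page, not something required verbatim):

```python
import torch
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-large-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-large-cased")

# Arbitrary dummy context: XLNet was trained on long sequences and tends to behave
# poorly on very short prompts without some padding text in front.
PADDING_TEXT = "In 1991, the remains of Russian Tsar Nicholas II and his family were discovered. <eod>"
input_ids = torch.tensor(
    [tokenizer.encode(PADDING_TEXT + " Hit the nail on the <mask>", add_special_tokens=False)]
)

perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]))
perm_mask[:, :, -1] = 1.0                      # no token may attend to the target position
target_mapping = torch.zeros((1, 1, input_ids.shape[1]))
target_mapping[0, 0, -1] = 1.0                 # predict only the last (masked) token

with torch.no_grad():
    logits = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)[0]
print(tokenizer.decode([logits[0, 0].argmax().item()]))
```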
transformers | 4,342 | closed | Added the option to seed generation in encoder-decoder models | Added the `decoder_input_ids` argument to `.generate()`, which acts as `input_ids` for the decoder instead of fixed BOS tokens used now. | 05-13-2020 16:16:53 | 05-13-2020 16:16:53 | |
transformers | 4,341 | closed | rerun notebook 02-transformers | Just a notebook rerun (some cells were not executed in the original one).
Few minor changes:
- removed installation cell output for readability (it can't be collapsed in GitHub notebook viewer)
- added prints with tensors details in the last cell (DE BERT model)
I'm unable to attach nbdiff as html (due to github limitations), hence adding both as zipped
[nbdiff_02-transformers.zip](https://github.com/huggingface/transformers/files/4623078/nbdiff_02-transformers.zip) and the link to [Gdrive](https://drive.google.com/open?id=1VlucbmczZjUOI8nuCJCAxFoKgjFsJKw0) for your convenience.
| 05-13-2020 16:02:30 | 05-13-2020 16:02:30 | Thanks @nikitajz,
LGTM ! 👍 |
transformers | 4,340 | closed | Support multitask learning | # 🚀 Feature request
There should be an easy way to support multitask learning.
## Motivation
It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).
The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:
- **Examples-proportional mixing** - sample from tasks proportionally to their dataset size
- **Equal mixing** - sample uniformly from each task
- **Temperature-scaled mixing** - The generalized approach used by multilingual BERT which uses a temperature T, where the mixing rate of each task is raised to the power 1/T and renormalized. When T=1 this is equivalent to examples-proportional mixing, and it becomes closer to equal mixing as T increases (a small worked example follows this list).
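For instance (illustrative numbers only): with two datasets of sizes 100 and 900 and T=2, the unnormalized rates are 100^(1/2)=10 and 900^(1/2)=30, giving mixing proportions 0.25 and 0.75, versus 0.1 and 0.9 under examples-proportional mixing.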
This definitely seems like a reusable component that would allow replication of the headline results used by many models.
The `run_glue.py` example only trains using a single GLUE task at a time, so I'm assuming people have made their own modifications to allow multitask learning. It seems especially sensible that there should be a way of training the T5 model in the multitask setting as was originally intended by the authors.
Maybe this could be an extension of the Trainer or a wrapper around multiple Datasets? Or even just an example.
## Your contribution
I can certainly help with implementation, though would need guidance on the best place to add this functionality.
| 05-13-2020 15:54:33 | 05-13-2020 15:54:33 | Having thought about this more, it would probably make most sense as an example of a PyTorch `Sampler`which implements temperature-scaled mixing on `ConcatDataset([dataset1, dataset2,...])`.
I'll attempt it using the t5 model firstly, and contribute it back if successful.<|||||>Hi, @ghomasHudson,
This seems like a great addition to Hugging Face; I have been looking around for multi-task learning as well. Most libraries out there don't support an easy way to achieve this, and a lot of people seem to request this feature.
I believe there are two ways to achieve multitask learning:
1) As you mentioned, treating input & output as "text to text", which most problems can fit into, so the only change I guess is at the data sampler level. It should be easy to support.
2) Another common way is having multiple "heads" for different tasks, with a single shared BERT, so BERT essentially learns on all the tasks at once. There is no easy way to abstract this out in Hugging Face yet (a rough sketch of this pattern follows below).
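A rough, hedged sketch of that shared-encoder / per-task-head pattern (the checkpoint name, task names and label counts are made-up placeholders, not an existing transformers API):

```python
import torch.nn as nn
from transformers import BertModel

class MultiTaskBert(nn.Module):
    """One shared BERT encoder with one classification head per task (sketch only)."""

    def __init__(self, num_labels_per_task):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        hidden_size = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden_size, n) for task, n in num_labels_per_task.items()}
        )

    def forward(self, task, input_ids, attention_mask=None):
        outputs = self.encoder(input_ids, attention_mask=attention_mask)
        pooled_output = outputs[1]              # pooled [CLS] representation
        return self.heads[task](pooled_output)  # logits for the requested task

# e.g. model = MultiTaskBert({"mnli": 3, "sst2": 2}); logits = model("sst2", input_ids)
```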
As you are trying out 1). Do you think it's worth investing time in the second approach too?
I have this weekend free, might as well raise a PR for the same .<|||||>@ghomasHudson I think these forms of *multi-task dataset mixing* can also be integrated straight into the new `nlp` library.<|||||>@avhirupc I've been focusing on 1. as it seems to be the simplest (and currently the best performing on GLUE). I think 2. would still rely on this and extend it further (choosing which head to train based on the task, and a bit of modelling work).
Multitask learning seems such an obvious part of current NLP approaches so I'm surprised more people aren't requesting it (Maybe it's more of a research-y aim than a production one?)
My current approach I'm working on is simply using `ConcatDataset` with weights decided using Temperature-scaled mixing. Something like this:
``` python
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler

def temperature_to_weights(dataset_lengths, temperature=2.0, maximum=None, scale=1.0):
'''Calculate mixing rates'''
mixing_rates = []
for length in dataset_lengths:
rate = length * scale
if maximum:
rate = min(rate,maximum)
        if temperature != 1.0:
rate = rate ** (1.0/temperature)
mixing_rates.append(rate)
return mixing_rates
datasets = [Dataset1(), Dataset2()]
dataset_lengths = [len(d) for d in datasets]
dataset_weights = temperature_to_weights(dataset_lengths)
# Calculate weights per sample
weights = []
for i in range(len(datasets)):
weights += [dataset_weights[i]] * len(datasets[i])
dataloader = DataLoader(ConcatDataset(datasets),
                        sampler=WeightedRandomSampler(
                            weights=weights,
                            num_samples=min(dataset_lengths),
                            replacement=False)
                        )
```
There's still a few things I'm unclear about (e.g. what should the `num_samples` be? clearly if we sample everything it's just the same as not doing any balancing at all). Would be nice to have a `MultitaskSampler` if I can work it out.
@enzoampil I'm open to ideas about the best way to do this in terms of interfacing with the library. Should properly open an issue over at [nlp](https://github.com/huggingface/nlp).<|||||>Hi @patrickvonplaten & @thomwolf ,
Do you think working on 2. is a good enhancement to hugging face. Multitask learning is common in industry as well as research. Similar to MT-DNN (https://arxiv.org/pdf/1901.11504.pdf)
If you feel it is a good enhancement to hugging face, I could work on it and raise a PR for the same.
Summarising:
Having task-specific heads eg classification, token classification, and an ability to train multiple tasks at once? <|||||>Hi everybody,
I think the `nlp` library is the best place for this as @enzoampil said. |
transformers | 4,339 | closed | TPU needs a rendezvous | When running on TPU, each ordinal needs to call `save_pretrained`. See [original implementation by @jysohn23](https://github.com/huggingface/transformers/pull/3702/files#diff-28e1baa470caa8e6f23b78a356b6bbdfR162).
If we don't do it this way, the trainer hangs when saving the model. | 05-13-2020 15:12:14 | 05-13-2020 15:12:14 | |
transformers | 4,338 | closed | can't load checkpoint file from examples/run_language_modeling.py | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
**GPT2**
Language I am using the model on (English, Chinese ...):
**English**
The problem arises when using:
* [x ] the official example scripts: (give details below)
There seems to be no supported way of continuing training or evaluating a previously saved model checkpoint.
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x] my own task or dataset: (give details below)
just trying to train/eval on wikitext-2 dataset
## To reproduce
Steps to reproduce the behavior:
1. python ../examples/language-modeling/run_language_modeling.py ^
--output_dir=output ^
--overwrite_output_dir ^
--tokenizer=gpt2 ^
--model_type=gpt2 ^
--model_name_or_path=output/pytorch.pytorch_model.bin ^
--do_eval ^
--per_gpu_eval_batch_size=1 ^
--eval_data_file=%userprofile%/.data/wikitext-2/wikitext-2/wiki.test.tokens
This gives an error because "model_name_or_path" is assumed to be a JSON file that contained pretrained model info, not a saved checkpoint file. The error that occurs here is when trying to load the CONFIG file associated with a pretrained model.
I also tried to create a new "model_checkpoint" argument that I then pass into AutoModelWithLMHead.from_pretrained(), but that ends up with a model/checkpoint mismatch (looks like hidden size in checkpoint file =256, but current model=768). In my usage here, I have never changed the hidden size - just did the "do-train" option and it saved my checkpoints to the output directory. And now, I am just trying to verify I can eval on a checkpoint, and then also continue training on a checkpoint.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I expected to be able to specify an checkpoint_path argument in the run_language_modeling.py that would load the checkpoint file and let me continue training on it and/or evaluate it.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no | 05-13-2020 12:34:28 | 05-13-2020 12:34:28 | `--model_name_or_path` should be a folder, so you should use just `./output` instead.<|||||>Thanks. Verified - that fixed it. Please add a note n the README.md to explain this. Thanks.<|||||>Hi, may I ask how did you get these checkpoint files? I tried to specify the path to the checkpoint that is generated by the script during training (containing _config.json_, _optimizer.pt_, _pytorch_model.bin_, _scheduler.pt_, _training_args.bin_), but I met with a Traceback like this
```
Traceback (most recent call last):
File "run_language_modeling.py", line 277, in <module>
main()
File "run_language_modeling.py", line 186, in main
tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
File "H:\Anaconda3\envs\env_name\lib\site-packages\transformers\tokenization_auto.py", line 203, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "H:\Anaconda3\envs\env_name\lib\site-packages\transformers\tokenization_utils.py", line 902, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "H:\Anaconda3\envs\env_name\lib\site-packages\transformers\tokenization_utils.py", line 1007, in _from_pretrained
list(cls.vocab_files_names.values()),
OSError: Model name 'C:\\path-to-ckpt\\checkpoint-17500' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed 'C:\\path-to-ckpt\\checkpoint-17500' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```
which technically says that the checkpoint folder misses some other files. I wonder where this mismatch comes from if I used the same script to train.<|||||>Those who are new to this issue I just figured it out and save your time 😜😀
What is this error about?
==> When you run the model for the first time, it downloads some files (e.g. `pytorch_model.bin`). If your internet connection breaks partway through, the pipeline keeps running without having completely downloaded that `pytorch_model.bin` file, and that is what raises this issue.
Steps :
1 ] Go to C:// Users / UserName / .cache
2 ] Delete .cache folder
3 ] And Done Just Run The Model Once Again......
You can connect me through @prashantmore999 { Twitter }
|
transformers | 4,337 | closed | Wrong dimensions of input to loss function, multilabel classification | # 🐛 Bug
## Information
**Model I am using (Bert, XLNet ...):**
bert-base-uncased
**Language I am using the model on (English, Chinese ...):**
English
**The problem arises when using:**
* [x] the official example scripts: (give details below)
[](url)
Problem originally arose when running my own scripts but I was able to reproduce with example code from [https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification](https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification)
**The tasks I am working on is:**
Multi-label classification
## To reproduce
Add `.from_pretrained("bert-base-uncased",num_labels=3)` and 2 additional labels (`torch.tensor([1,0,0])`) to example code from docs, like so:
```
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",num_labels=3)
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
labels = torch.tensor([1,0,0]).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
```
> Traceback (most recent call last):
File "/Users/Simpan/agge/agust/playground/sve/test/loss_bug.py", line 8, in <module>
outputs = model(input_ids, labels=labels)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/transformers/modeling_bert.py", line 1193, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/functional.py", line 2009, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/functional.py", line 1836, in nll_loss
.format(input.size(0), target.size(0)))
ValueError: Expected input batch_size (1) to match target batch_size (3).
## Expected behavior
Was expecting to get loss for output nodes vs. labels (3 in this case).
Furthermore, removing labels as input to model returns logits with right dimensions:
```
#outputs = model(input_ids, labels=labels)
outputs = model(input_ids)
print(outputs)
```
> (tensor([[ 0.1703, -0.0389, 0.0923]], grad_fn=<AddmmBackward>),)
## Environment info
- `transformers` version: 2.9.0
- Platform: Darwin-18.5.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
 | 05-13-2020 11:30:52 | 05-13-2020 11:30:52 | If you are doing anything custom, then you should write your own model class, right? Plus your own loss function suiting your needs, if the default one raises errors.
[This](https://discuss.pytorch.org/t/what-kind-of-loss-is-better-to-use-in-multilabel-classification/32203/) might help;<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
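For reference, a minimal sketch of the "write your own loss" route for multi-label targets, building on the snippet in the report above (using `BCEWithLogitsLoss` instead of the cross-entropy applied inside `BertForSequenceClassification`):

```python
import torch
from torch.nn import BCEWithLogitsLoss
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

input_ids = torch.tensor(
    tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)
).unsqueeze(0)
labels = torch.tensor([[1.0, 0.0, 0.0]])   # float multi-hot targets, shape (batch, num_labels)

logits = model(input_ids)[0]               # don't pass labels; compute the loss ourselves
loss = BCEWithLogitsLoss()(logits, labels)
```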
|
transformers | 4,336 | closed | Unable to load weights from pytorch checkpoint file | # 🐛 Bug
## Information
I uploaded two models this morning using the `transformers-cli`. The models can be found on my [huggingface page](https://huggingface.co/mananeau). The folder I uploaded for both models contained a PyTorch model in bin format, a zip file containing the three TF model files, the `config.json` and the `vocab.txt`. [The PT model](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.bin) was created from TF checkpoints [using this code](https://huggingface.co/transformers/converting_tensorflow_models.html). I'm able to download the tokenizer using:
``tokenizer = AutoTokenizer.from_pretrained("mananeau/clinicalcovid-bert-base-cased")``.
Yet, when trying to download the model using:
``model = AutoModel.from_pretrained("mananeau/clinicalcovid-bert-base-cased")``
I am getting the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in _check_seekable(f)
226 try:
--> 227 f.seek(f.tell())
228 return True
AttributeError: 'NoneType' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
625 try:
--> 626 state_dict = torch.load(resolved_archive_file, map_location="cpu")
627 except Exception:
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
425 pickle_load_args['encoding'] = 'utf-8'
--> 426 return _load(f, map_location, pickle_module, **pickle_load_args)
427 finally:
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
587
--> 588 _check_seekable(f)
589 f_should_read_directly = _should_read_directly(f)
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in _check_seekable(f)
229 except (io.UnsupportedOperation, AttributeError) as e:
--> 230 raise_err_msg(["seek", "tell"], e)
231
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in raise_err_msg(patterns, e)
222 " try to load from it instead.")
--> 223 raise type(e)(msg)
224 raise e
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-5-5451ffd9a3b5> in <module>
----> 1 model = AutoModel.from_pretrained("mananeau/clinicalcovid-bert-base-cased")
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
425 for config_class, model_class in MODEL_MAPPING.items():
426 if isinstance(config, config_class):
--> 427 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
428 raise ValueError(
429 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
627 except Exception:
628 raise OSError(
--> 629 "Unable to load weights from pytorch checkpoint file. "
630 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "
631 )
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.0
- Platform: Ubuntu 18.04
- Python version: 3.7.4
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?): 1.14.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-13-2020 10:17:24 | 05-13-2020 10:17:24 | I had this problem today,too. I create a new container as my new gpu environment, but cannot load any pretrained due to this error, but the same load pretrained codes are normal run on my old enviroment to download the pretrained<|||||>Hi @mananeau, when I look on the website and click on "show all files" for your model, it only lists the configuration and vocabulary. Have you uploaded the model file?<|||||>I believe I did. It can be found under [this link](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.bin). Also, when doing `transformers-cli s3 ls`, I get this output:

<|||||>I have this problem too, and I do have all my files

<|||||>For this problem
I switched to pip install with the repository of tranformers=2.8 which had been download in my old environment.
It normal works to download and load any pretrained weight
I don't know why, but it's work<|||||>> I switched to pip install with the repository of tranformers=2.8 which had been download in my old environment.
I cannot confirm this on my end. Tried with transformers==2.8.0 and still getting the same error. <|||||>@mananeau We could make it clearer/more validated, but the upload CLI is meant to use only for models/tokenizers saved using the .save_pretrained() method.
In particular here, your model file should be named `pytorch_model.bin`
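For reference, a hedged sketch of that workflow (file names and paths are taken from this thread or invented for illustration, and the model class has to match how the TF checkpoint was converted):

```python
import os
import torch
from transformers import BertConfig, BertForPreTraining, BertTokenizer

config = BertConfig.from_json_file("config.json")
model = BertForPreTraining(config)
state_dict = torch.load("clinicalcovid_bert_base_cased.bin", map_location="cpu")
model.load_state_dict(state_dict, strict=False)   # strict=False in case head weights differ

save_dir = "clinicalcovid-bert-base-cased"
os.makedirs(save_dir, exist_ok=True)
model.save_pretrained(save_dir)                       # writes pytorch_model.bin + config.json
BertTokenizer("vocab.txt").save_pretrained(save_dir)  # writes vocab.txt and tokenizer files
```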
<|||||>> For this problem
> I switched to pip install with the repository of tranformers=2.8 which had been download in my old environment.
>
> It normal works to download and load any pretrained weight
>
> I don't know why, but it's work
This worked for me too<|||||>hi !!
When i try this code explore_model.ipynb from https://github.com/sebkim/lda2vec-pytorch, the following error occurs .

how to resolve it ?? someone help me plz<|||||>@fathia-ghribi this is unrelated to the issue here. Please open a new issue and fill out the issue template so that we may help you. On a second note, your error does not seem to be related to this library.<|||||>I had this problem when I trained the model with `torch==1.6.0` and tried to load the model with `1.3.1`. The issue was fixed by upgrading to `1.6.0` in my environment where I'm loading the model.<|||||>Had same error on `torch==1.8.1` and `simpletransfomers==0.61.4`
downgrading torch or simpletransfomers doesn't work for me, because the issue caused by the file - not properly downloaded.
I solved this issue with git clone my model on local, or upload model files on google drive and change directory.
```
model = T5Model("mt5", "/content/drive/MyDrive/dataset/outputs",
args=model_args, use_cuda=False, from_tf=False, force_download=True)
```<|||||>>
>
> For this problem
> I switched to pip install with the repository of tranformers=2.8 which had been download in my old environment.
>
> It normal works to download and load any pretrained weight
>
> I don't know why, but it's work
thank youuuuuu <|||||>I had the same problem with:
```
sentence-transformers 2.2.0
transformers 4.17.0
torch 1.8.1
torchvision 0.4.2
Python 3.7.6
```
I solved it by upgrading torch with `pip install --upgrade torch torchvision`. Now working with
```
sentence-transformers 2.2.0
transformers 4.17.0
torch 1.10.2
torchvision 0.11.3
Python 3.7.6
```<|||||>In my case, there was something problem during moving the files, so `pytorch_model.bin` file existed but the size was 0 byte. After replacing it with correct file, the error removed.<|||||>Just delete the corrupted cached files and rerun your code; it will work.<|||||>> Just delete the corrupted cached files and rerun your code; it will work.
Yes, it works for me<|||||>I had to downgrade from torch 2.0.1 to 1.13.1.<|||||>>
does it work? I had the same problem
|
transformers | 4,335 | closed | [MbartTokenizer] save to sentencepiece.bpe.model | There is a mismatch between VOCAB_FILES_NAMES for XLMRobertaTokenizer and MBartTokenizer.
When saving tokenizer files with save_pretrained(), XLMRobertaTokenizer's vocab_file_name is used, while during loading using load_pretrained(), MBartTokenizer's vocab_file_name is used.
| 05-13-2020 08:50:20 | 05-13-2020 08:50:20 | Thanks!
Can you verify that
```bash
RUN_SLOW=1 pytest --tb=short -p no:warnings tests/test_modeling_bart.py -sv
```
works with this change?<|||||>Hi Sam!
Yes, the test passes on my machine.<|||||>I think you need to run `make style` for `check_code_quality` to pass.<|||||>Thanks, I just did but now black is complaining about other files being reformated. <|||||>sorry about that, there were some changes in the interim.
Try
```bash
pip install flake8 --upgrade
make style
```
you might also need to `git merge master` or `git rebase master`
At the end, this PR should only change `tokenization_bart.py`.
<|||||>Hi Sam!
The PR is now passing the tests. Is there anything else you would like to update?<|||||>Nope, thanks for contributing! Merging! |
transformers | 4,334 | closed | [bug in run_glue.py] GlueDataset with no local_rank when init | https://github.com/huggingface/transformers/blob/241759101e7104192d01a07fc70432fa02ae8cb7/examples/text-classification/run_glue.py#L137
GlueData need to add local_rank parameter when init
```python
train_dataset = GlueDataset(data_args, tokenizer=tokenizer, local_rank=training_args.local_rank) if training_args.do_train else None
``` | 05-13-2020 08:21:39 | 05-13-2020 08:21:39 | Not on master.
You should install from source when running the examples, as specified in the [README](https://github.com/huggingface/transformers#run-the-examples). |
transformers | 4,333 | closed | Clarification of model upload instructions | ## What changed
In the `README.md`, I specified two details in the model upload part:
- added `folder` to the `./path/to/pretrained_model/` just to make sure it is straightforward
- added a comment line specifying that the configuration file should be entitled `config.json` for the model to appear on the website. Linked with [this issue](https://github.com/huggingface/transformers/issues/4322) I raised.
This [website page](https://huggingface.co/transformers/model_sharing.html) should also be modified accordingly but I wasn't sure where to include the changes.
## Additional potential improvements
- One extra thing I thought of was to specify what kind of model files is accepted when mentioning the folder but I didn't have information on that. My upload with PyTorch weights went smoothly. What about TF weights? Does uploading only the three TF model files work? Is that the case for TF1 and TF2 weights?
- One minor improvement could also be to mention earlier that the model name on hugging face will be the same as the name of the uploaded folder. In my case, I read this after uploading the folder and I had to delete the files on S3 and re-upload with a new folder name that suited me better.
| 05-13-2020 07:37:20 | 05-13-2020 07:37:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=h1) Report
> Merging [#4333](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/241759101e7104192d01a07fc70432fa02ae8cb7&el=desc) will **decrease** coverage by `1.63%`.
> The diff coverage is `77.07%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4333 +/- ##
==========================================
- Coverage 78.18% 76.55% -1.64%
==========================================
Files 120 128 +8
Lines 20020 21502 +1482
==========================================
+ Hits 15652 16460 +808
- Misses 4368 5042 +674
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/commands/transformers\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (ø)` | |
| [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.33% <ø> (-0.15%)` | :arrow_down: |
| [src/transformers/configuration\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2NhbWVtYmVydC5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `35.71% <0.00%> (-64.29%)` | :arrow_down: |
| [src/transformers/configuration\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (-0.08%)` | :arrow_down: |
| ... and [152 more](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=footer). Last update [2417591...cca51b7](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,332 | closed | Extractive Text Summarization | # 🚀 Feature request
While the abstractive text summarization with T5 and Bart already achieve impressive results, it would be great to add support for state-of-the-art **extractive** text summarization, such as the recent [MatchSum](https://github.com/maszhongming/MatchSum) which outperforms [PreSum](https://github.com/nlpyang/PreSumm) by a significant margin.
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
The Bart-based summarization is already pretty awesome.
However, I recently got this summary with Bart from a Bill Gates article:
> "We are seeing more and more people get sick and dying, and this is a good thing, but it also means that we have less time to prepare for the future."
It seems to me, that the extractive methods are still "less risky" while they can also achieve great results.
So adding an easy way to access one of the extractive methods, for example the new [MatchSum](https://github.com/maszhongming/MatchSum) algorithm, which also now released the pre-trained models for CNN/DM, would be really awesome! | 05-13-2020 06:54:17 | 05-13-2020 06:54:17 | Are you sampling tokens? If so, turn it off and maybe also turn up the beam size. That will give more extractive outputs<|||||>Thanks @Laksh1997 for the idea, unfortunately, the results don't get better.
This is the article I'm summarizing: https://www.gatesnotes.com/Health/Pandemic-Innovation
### Result with `min_length=500, max_length=1000` and default settings of the summarizer pipeline:
This is the first part of a two-part series on the impact of global warming. The second part of the series will look at ways to reduce the effects of climate change. The third part will focus on ways to prevent the spread of the disease. The fourth part will be on how we can make sure we don't see a repeat of what happened in the 1980s and 1990s. It will also look at how to make sure that we don’t see an increase in the number of people who need to be treated for the disease every time it rears its head. The final part of this series looks at ways we can reduce the impact on the economy of the global warming crisis by reducing the amount of money spent on health care. It is also a look at some of the ways in which we can prevent the disease from getting worse, such as by making sure we have better access to the right equipment and training for doctors and nurses. The last part will look back at how we were able to stop the disease’s spread in the first place, and how we’ve been able to do so since then. It’ll be interesting to see how we respond to the current crisis, which has caused a lot of people to lose their jobs and homes, as well as the loss of health care and the cost of living that has gone up by a third since the beginning of the year. We’re in the midst of a global pandemic, but we have a long way to go before we see the full extent of the damage caused by climate change, which is likely to be much worse in the coming years. We also need to look at what we can do to prevent it from happening in the future, including ways to make it easier for people to get the care they need. We need to make the most of the time we have left before it gets worse, and we need to do it in a way that makes it easier to get to the bottom of the problem. We can do this by focusing on what we are doing now, rather than focusing on the causes of the illness, which can be hard to come by in a small number of cases. We should also be looking for ways to keep the disease at a low level so that it doesn't spread as far and as fast as possible. We are seeing more and more people get sick and dying, and this is a good thing, but it also means that we have less time to prepare for the future.
### Result with `num_beams=8` and `do_sample=False`:
This is the first part of a two-part series on the impact of climate change on the U.S. and the world. The second part of the series will look at ways to reduce the effects of global warming. The third part will focus on how we can reduce the number of people affected by climate change. The fourth and final part will be a look at some of the ways we can make sure we don't suffer the same fate as those who have been affected by the climate change pandemics of the past few years. It will be the first of a series of articles on the topic, and will be followed by a series on climate change in the next few months. For more information, go to: http://www.cnn.com/2013/01/29/climate-change/index.html#storylink=cpy, and for more information on the Global Warming Program, visit: http://www.climatechange.org/2013-01-29/global-warming-program/. For more on the World Health Organization (WHO), go to www.welcome.org/. For information on how to get involved in the fight against climate change, visit the WHO’s website. For information about how to help people in need of financial assistance, visit www.worldhealth.org. For confidential support, call the Samaritans on 08457 90 90 90 or visit a local Samaritans branch, see www.samaritans.org for details. For support on suicide matters call the National Suicide Prevention Lifeline on 1-800-273-TALK (8255). For support in the UK, visit The Samaritans’ local branch, or click here. For help in the United States, see the National Institutes of Health (NHS), which has a range of programs that can help people cope with the changing nature of the threat to their health, such as the threat of pneumococcal meningitis, sepsis, stroke, and other ailments. For all the information you need, visit http:www.nhs.uk/news/publications/cnn/2014/07/09/world-health-paediatric-pneumonia-and-sickness-in-the-middle-of-a-drought.html. For the full series, see:http:/ / www.nhc.gov/newspeak/stories/2014-09-09/the-world-succeeding-against-climate-changes.html?title=World-warming-disease-infiltrating-crisis-initiative.
I have no idea where it gets the idea of climate change from :D
<|||||>@timsuchanek Note that transformer models can only consider context of up to N number of **subtokens** (in the case of BART I think N = 1024).
So, if the input context (the long document) is greater than this, it will be truncated to 1024 subtokens.
This means if you ask the decoder to generate more than what it can consider in context, it will at best copy the context, and at worse start to make up stuff.
I'm not sure if min_length and max_length refer to subtokens or tokens in the huggingface implementation.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
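A hedged sketch of keeping the request inside that ~1024-subtoken window (the checkpoint name and generation settings are illustrative, and the exact truncation kwargs differ a bit between library versions):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = open("article.txt").read()   # placeholder for the long input document
inputs = tokenizer.encode(article, max_length=1024, truncation=True, return_tensors="pt")

summary_ids = model.generate(
    inputs, num_beams=8, do_sample=False, min_length=56, max_length=142
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```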
|
transformers | 4,331 | closed | Add image and link to model paper | Now i've had my fun. I thought i could do better but it will do :) | 05-13-2020 05:39:12 | 05-13-2020 05:39:12 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=h1) Report
> Merging [#4331](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/241759101e7104192d01a07fc70432fa02ae8cb7&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4331 +/- ##
=======================================
Coverage 78.18% 78.18%
=======================================
Files 120 120
Lines 20020 20020
=======================================
Hits 15652 15652
Misses 4368 4368
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=footer). Last update [2417591...65588fe](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>lol, it's great<|||||>You can also add it as a metadata for it to display on social media:
```
---
language: norwegian
thumbnail: https://i.imgur.com/yxnE5GC.png
---
``` |
transformers | 4,330 | closed | Can't set attribute 'device' | I'm fine-tuning BERT and came across this problem.
Here is my config; I'm running on the 3rd GPU:
```json
BertConfig {
"attention_probs_dropout_prob": 0.1,
"device": "cuda:3",
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_hidden_states": true,
"output_past": true,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"use_pooler": true,
"vocab_size": 105879,
"weight_class": [
1,
1
]
}
```
and here is the output error in terminal:
```
Traceback (most recent call last):
File "main.py", line 50, in <module>
model = BERTQa.from_pretrained(args.folder_model, config=config)
File "/home/dle/anaconda3/envs/phobert/lib/python3.7/site-packages/transformers/modeling_utils.py", line 622, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/dstore/home/dle/phobert/QA_PhoBERT/ZaloBert.py", line 10, in __init__
self.device = 'cuda:3'
File "/home/dle/anaconda3/envs/phobert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 638, in __setattr__
object.__setattr__(self, name, value)
AttributeError: can't set attribute
```
| 05-13-2020 05:10:49 | 05-13-2020 05:10:49 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I met the same issue, have you fixed? Thanks. |
transformers | 4,329 | closed | Cache directory | Where is the cache directory?
```
BertTokenizer.from_pretrained('bert-base-uncased')
```
I want to download manually and place it there.
```
OSError: Couldn't reach server at '{}' to download vocabulary files.``` | 05-13-2020 04:38:27 | 05-13-2020 04:38:27 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
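(Not actually answered in this thread, but for reference: a hedged sketch of two ways to control where the files come from, either an explicit `cache_dir` or a local folder with manually downloaded files. Paths are placeholders; the default cache is a hidden folder under the user's home directory and its exact location depends on the library version.)

```python
from transformers import BertTokenizer

# Option 1: keep the automatic download but choose where the cached files live.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", cache_dir="/data/hf_cache")

# Option 2: download vocab.txt manually and load it from a local directory instead.
tokenizer = BertTokenizer.from_pretrained("/data/bert-base-uncased")  # folder containing vocab.txt
```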
|
transformers | 4,328 | closed | wrong variable name used | Turned
`nlp.tokenizer.mask_token` to `fill_mask.tokenizer.mask_token` | 05-13-2020 03:37:38 | 05-13-2020 03:37:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=h1) Report
> Merging [#4328](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/241759101e7104192d01a07fc70432fa02ae8cb7&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4328 +/- ##
=======================================
Coverage 78.18% 78.18%
=======================================
Files 120 120
Lines 20020 20020
=======================================
+ Hits 15652 15653 +1
+ Misses 4368 4367 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.08% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=footer). Last update [2417591...f0bb172](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,327 | closed | 🐛 Trainer on TPU : KeyError '__getstate__' | # 🐛 Bug
## Information
Model I am using : **ELECTRA base**
Language I am using the model on : **English**
The problem arises when using:
* [ ] the official example scripts
* [x] **my own modified scripts**
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task
* [x] **my own task or dataset**
## To reproduce
I'm trying to fine-tune a model on Colab TPU, using the new Trainer API. But I'm struggling.
Here is a self-contained [Colab notebook](https://colab.research.google.com/drive/1J0m_ULnSHgtXs1FXni9O3VHPdZCMu9SZ?usp=sharing) to reproduce the error (it's a dummy example).
When running the notebook, I get the following error :
```
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 199, in __getattr__
return self.data[item]
KeyError: '__getstate__'
```
---
Full stack trace :
```
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 172, in _worker
batch = xm.send_cpu_data_to_device(batch, device)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 624, in send_cpu_data_to_device
return ToXlaTensorArena(convert_fn, select_fn).transform(data)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 307, in transform
return self._replace_tensors(inputs)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 301, in _replace_tensors
convert_fn)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/utils/utils.py", line 199, in for_each_instance_rewrite
return _for_each_instance_rewrite(value, select_fn, fn, rwmap)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/utils/utils.py", line 179, in _for_each_instance_rewrite
result.append(_for_each_instance_rewrite(x, select_fn, fn, rwmap))
File "/usr/local/lib/python3.6/dist-packages/torch_xla/utils/utils.py", line 187, in _for_each_instance_rewrite
result = copy.copy(value)
File "/usr/lib/python3.6/copy.py", line 96, in copy
rv = reductor(4)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 199, in __getattr__
return self.data[item]
KeyError: '__getstate__'
```
**Any hint on how to make this dummy example work is welcomed.**
## Environment info
- `transformers` version: **2.9.0**
- Platform: **Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic**
- Python version: **3.6.9**
- PyTorch version (GPU?): **1.6.0a0+cf82011 (False)**
- Tensorflow version (GPU?): **2.2.0 (False)**
- Using GPU in script?: **No**
- Using distributed or parallel set-up in script?: **No**
@jysohn23 @julien-c | 05-13-2020 02:54:57 | 05-13-2020 02:54:57 | As a temporary work-around, I made the `BatchEncoding` object pickable :
```python
from transformers.tokenization_utils import BatchEncoding
def red(self):
return BatchEncoding, (self.data, )
BatchEncoding.__reduce__ = red
```
Not closing yet, as this seems to be just a work-around and not a real solution.<|||||>Cc @mfuntowicz <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
 <|||||>Hi, I got the same error with transformers 2.9.0 and 2.9.1; the following error appears:
------------
Traceback (most recent call last):
Traceback (most recent call last):
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 203, in __getattr__
return self.data[item]
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 203, in __getattr__
return self.data[item]
KeyError: '__getstate__'
KeyError: '__getstate__'
Traceback (most recent call last):
Traceback (most recent call last):
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 203, in __getattr__
return self.data[item]
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
KeyError: '__getstate__'
File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 203, in __getattr__
return self.data[item]
KeyError: '__getstate__'
------------------------------------------------------------------
My code pieces:
```python
dl = DataLoader(data, batch_size=self.batch_size, shuffle=self.shuffle, collate_fn=partial(self.tok_collate),
                num_workers=2)
return dl

def tok_collate(self, batch_data):
    encoded = self.tokenizer.batch_encode_plus(
        [x[0] for x in batch_data],
        add_special_tokens=True,
        #return_tensors='pt',
        pad_to_max_length=True)
    for i in range(len(encoded['input_ids'])):
        print("tokens : {}".format([self.tokenizer.convert_ids_to_tokens(s) for s in encoded['input_ids'][i]]))
        print("input_ids : {}".format(encoded['input_ids'][i]))
        print("token_type_ids : {}".format(encoded['token_type_ids'][i]))
        print("attention_mask : {}".format(encoded['attention_mask'][i]))
        print('------------')
    if self.predict:
        return encoded
    else:
        labels = torch.tensor([x[1] for x in batch_data])
        # print('labels: ', labels)
        return encoded, labels
```
<|||||>@Colanim How did you solve it?<|||||>I think it was fixed in the latest version of `transformers`.
If you need to work with an older version of `transformers`, for me the work-around I mentioned earlier was working :
> As a temporary work-around, I made the `BatchEncoding` object pickable :
>
> ```python
> from transformers.tokenization_utils import BatchEncoding
>
> def red(self):
> return BatchEncoding, (self.data, )
>
> BatchEncoding.__reduce__ = red
> ```
<|||||>Indeed, this should have been fixed in the versions `v3+`. Thanks for opening an issue @Colanim. |
transformers | 4,326 | closed | Train model from scratch with Tensorflow | First of all thank you for a great library. I am quite new to the library so sorry if my questions sound dumb. I see a tutorial to train a model from scratch (https://huggingface.co/blog/how-to-train) which is very useful. My questions are:
1/ It seems like that tutorial is for training using PyTorch. Is there any tutorial for Tensorflow?
2/ I am thinking about training an ALBERT model with just 6 layers. If possible I want to use the weights of the pre-trained ALBERT (since all layers share parameters) and also the structure but just change the number of layers from 12 to 6. Is it possible or I need to train entirely from scratch? | 05-13-2020 00:49:39 | 05-13-2020 00:49:39 | > 1/ It seems like that tutorial is for training using PyTorch. Is there any tutorial for Tensorflow?
[Here](https://github.com/stefan-it/turkish-bert/blob/master/CHEATSHEET.md) is a cheatsheet for pretraining using TF code. It is written for BERT but the training for ALBERT is similar.
Regarding pretraining code for ALBERT in TensorFlow, you will find relevant resources in [the official ALBERT repository](https://github.com/google-research/ALBERT). You will need a corpus in txt format with one sentence per line. Ideally after splitting the corpus in smaller shards, you can use the `create_pretraining_data.py` file to transform the txt files in tfrecords. After that, you can run `run_pretraining.py` to pretrain your model.
> 2/ I am thinking about training an ALBERT model with just 6 layers. If possible I want to use the weights of the pre-trained ALBERT (since all layers share parameters) and also the structure but just change the number of layers from 12 to 6. Is it possible or I need to train entirely from scratch?
This makes me think of what @VictorSanh et al. did in [DistilBERT](https://arxiv.org/abs/1910.01108), namely initializing with every other layer from a vanilla BERT base. The difference is they added a distillation loss for the student to learn from the teacher. I'm not sure whether you can modify the number of layers in the configuration (that could be) but pretraining with less layers might severely impact your model performance. One thing you could do to maximize performance is to pretrain a normal 12-layer ALBERT base and then distil it (more on how to do this [here](https://github.com/huggingface/transformers/tree/master/examples/distillation)). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hello, Has there been anything done on this issue, if yes can you please share? |
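As a rough sketch of the second question above: since ALBERT shares one layer group across all layers, shrinking it to 6 layers while reusing the pretrained weights can be done through the configuration. The model name is an assumption and the effect on downstream quality is untested here:

```python
from transformers import AlbertConfig, AlbertModel

# num_hidden_layers only controls how many times the shared layer group is
# applied, so the pretrained weights still load after lowering it.
config = AlbertConfig.from_pretrained("albert-base-v2")
config.num_hidden_layers = 6
model = AlbertModel.from_pretrained("albert-base-v2", config=config)
```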
transformers | 4,324 | closed | (v2) Improvements to the wandb integration | See #4220 and #4221 | 05-12-2020 23:00:52 | 05-12-2020 23:00:52 | |
transformers | 4,323 | closed | Fix FFN dropout in TFAlbertLayer, and split dropout in TFAlbertAttent… | …ion into two separate dropout layers.
1) The dropout in TFAlbertLayer's FFN submodule was never used, since it was applied to hidden_states, which was immediately overwritten. Fixed to apply to ffn_output.
2) Likewise, the dropout in TFAlbertAttention defaulted to the dropout defined in TFBertAttention. The ALBERT paper handles this a bit differently, instead having two separate parameters controlling dropout probabilities. See https://github.com/google-research/albert/blob/master/modeling.py#L971-L993 for an example of how it is coded in the original repo. The `attention_1` scope uses `attention_probs_dropout_prob`, while the `output` scope uses `hidden_dropout_prob`.
Since the default dropout probabilities are 0 for ALBERT, these changes shouldn't affect the model accuracies when using default config. It will help practitioners and researchers who are hyperparameter tuning to have proper dropout implementation. | 05-12-2020 20:48:41 | 05-12-2020 20:48:41 | @LysandreJik Any thoughts?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
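For anyone tuning these values once the fix is in, a hedged sketch of setting the two probabilities from the configuration; the model name and the 0.1 values are assumptions:

```python
from transformers import AlbertConfig, TFAlbertModel

config = AlbertConfig.from_pretrained(
    "albert-base-v2",
    attention_probs_dropout_prob=0.1,  # dropout applied to the attention probabilities
    hidden_dropout_prob=0.1,           # dropout applied to the attention/FFN outputs
)
model = TFAlbertModel.from_pretrained("albert-base-v2", config=config)
```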
<|||||>Not stale<|||||>Will take care of the conflict and merge after https://github.com/huggingface/transformers/pull/6247 is merged.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=h1) Report
> Merging [#4323](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/155288f04ba9a5d0a0e4d5be4f6d4e808ad8cfff&el=desc) will **decrease** coverage by `2.57%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4323 +/- ##
==========================================
- Coverage 79.94% 77.37% -2.58%
==========================================
Files 153 153
Lines 27902 27907 +5
==========================================
- Hits 22307 21593 -714
- Misses 5595 6314 +719
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.15% <100.00%> (+0.10%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `88.39% <100.00%> (+0.04%)` | :arrow_up: |
| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-30.36%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |
| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=footer). Last update [155288f...a6da32a](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, just merged with master and fixed conflicts.<|||||>Thanks a lot for your contribution @jarednielsen ! |
transformers | 4,322 | closed | Uploaded models not appearing on website | # 🐛 Bug
## Information
I uploaded two models (named `clinicalcovid-bert-base-cased` and `biocovid-bert-large-cased`) following the instructions on the website. The folder I uploaded for each model contains the configuration JSON file, the vocabulary in txt format, the three TensorFlow model files and the PyTorch model bin. They are now downloadable from S3 at the following links:
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `clinicalcovid-bert-base-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/bert_config.json) • [`tensorflow weights`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.ckpt.data-00000-of-00001) • [`tensorflow.meta`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.ckpt.meta) • [`tensorflow.index`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.ckpt.index) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/vocab.txt)
| `biocovid-bert-large-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/bert_config_bio_58k_large.json) • [`tensorflow weights`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/biocovid_bert_large_cased.ckpt.data-00000-of-00001) • [`tensorflow.meta`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/biocovid_bert_large_cased.ckpt.meta) • [`tensorflow.index`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/biocovid_bert_large_cased.ckpt.index) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/biocovid_bert.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/vocab_cased_pubmed_pmc_30k.txt)
Yet, they are not appearing on the [website](https://huggingface.co/mananeau). Any idea why that could be?
| 05-12-2020 20:02:26 | 05-12-2020 20:02:26 | Hi @mananeau we should improve validation and error messages soon, but the config.json file should be exactly named `config.json`
If you rename those files your models will pop up.
Are they TF2 models?<|||||>Got it, will try now.
> Are they TF2 models?
They are TF1.11 and TF1.15 models.
<|||||>It worked, thanks a lot for your swift reply :) I created a PR mentioned above to clarify the upload instructions in the `README.md`.
Keep on rocking!<|||||>> They are TF1.11 and TF1.15 models.
FYI you can host those models if you want, but the transformers library doesn't support them<|||||>(I'd be interested in hearing more about your use case here though) |
transformers | 4,321 | closed | Add modelcard with acknowledgements | Add Acknowledgements | 05-12-2020 18:15:43 | 05-12-2020 18:15:43 | |
transformers | 4,320 | closed | Question Answering for TF trainer | This PR add the Question Answering task in the Tensorflow trainer. For the moment the evaluation is not possible due to the complexity of the SQuAD metrics but the training is working. | 05-12-2020 17:31:22 | 05-12-2020 17:31:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=h1) Report
> Merging [#4320](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bf5042240d33286460b83f3dbf9be77500faab3&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4320 +/- ##
==========================================
- Coverage 78.16% 78.15% -0.01%
==========================================
Files 120 120
Lines 20005 20009 +4
==========================================
+ Hits 15636 15638 +2
- Misses 4369 4371 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `17.75% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `55.31% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (ø)` | |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.09% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=footer). Last update [4bf5042...2a6162b](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LMK when it's ready to merge<|||||>Good to merge! |
transformers | 4,319 | closed | AutoModel.from_pretrained with torchscript flag raises a TypeError: __init__() got an unexpected keyword argument 'torchscript' | # 🐛 Bug
## Information
Model I am using: BertModel and AutoModel
Language I am using the model on: English
## To reproduce
Steps to reproduce the behavior:
```python
from transformers.modeling_auto import AutoModel
from transformers.modeling_bert import BertModel
bert_model = BertModel.from_pretrained('bert-base-uncased', torchscript=True)
bert_model = AutoModel.from_pretrained('bert-base-uncased', torchscript=True)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Behaviour
`bert_model = AutoModel.from_pretrained('bert-base-uncased', torchscript=True)` raises a
`TypeError: __init__() got an unexpected keyword argument 'torchscript'`
## Expected behaviour
Successfully create a BertModel object using AutoModel class.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.0
- Platform: `Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64`
- Python version: 3.6.10
- PyTorch version: 1.3.1
- Tensorflow version: Not applicable
- Using GPU in script?: Not applicable
- Using distributed or parallel set-up in script?: Not applicable
| 05-12-2020 16:01:30 | 05-12-2020 16:01:30 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue still exists and is relevant to me. Can we get an official response?<|||||>Yes, I would also like an official answer, please.<|||||>It appears that `AutoConfig` accepts a `torchscript` keyword parameter. The `AutoConfig` object can then be passed as the `config` keyword parameter to `AutoModel`. Hope this workaround helps @jonsnowseven <|||||>Hi! This was fixed by https://github.com/huggingface/transformers/pull/5665. Could you try to install from source and try again? |
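To make the work-around from the last comment concrete, a minimal sketch for releases where the `AutoModel` call above still raises the `TypeError`:

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("bert-base-uncased", torchscript=True)
model = AutoModel.from_pretrained("bert-base-uncased", config=config)
```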
transformers | 4,318 | closed | [model_cards]: 🇹🇷 Add new ELECTRA small and base models for Turkish | Hi,
this PR introduces model cards for new ELECTRA small and base models for Turkish 🇹🇷.
More information (checkpoint evaluation, downstream evaluation on Pos Tagging and NER, loss curves, TensorBoards) see [this repository](https://github.com/stefan-it/turkish-bert/tree/electra/electra). | 05-12-2020 15:28:14 | 05-12-2020 15:28:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=h1) Report
> Merging [#4318](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bf5042240d33286460b83f3dbf9be77500faab3&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4318 +/- ##
==========================================
+ Coverage 78.16% 78.17% +0.01%
==========================================
Files 120 120
Lines 20005 20005
==========================================
+ Hits 15636 15638 +2
+ Misses 4369 4367 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.09% <0.00%> (+0.28%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=footer). Last update [4bf5042...dc686bb](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,317 | closed | How to use DistilBertTokenizer in C++ | I saw that fast tokenizers was realized as Rust library, but I not fund bindings for C++, and also I cannot use a pretrained distilled tokenizer in fast mode. | 05-12-2020 14:33:29 | 05-12-2020 14:33:29 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,316 | closed | Allow BatchEncoding to be initialized empty. | This is required by recent changes introduced in TF 2.2. | 05-12-2020 14:22:24 | 05-12-2020 14:22:24 | Good job finding this! |
transformers | 4,315 | closed | Update README.md | 05-12-2020 13:25:03 | 05-12-2020 13:25:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=h1) Report
> Merging [#4315](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bf5042240d33286460b83f3dbf9be77500faab3&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4315 +/- ##
=======================================
Coverage 78.16% 78.16%
=======================================
Files 120 120
Lines 20005 20005
=======================================
+ Hits 15636 15637 +1
+ Misses 4369 4368 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (ø)` | |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.09% <0.00%> (+0.28%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=footer). Last update [4bf5042...ef8a2ed](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 4,314 | closed | run_generation.py GPT2 model only using 1st GPU, OOM error | # 🐛 Bug
## Information
GPT2-xl (pre-trained)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
Run in jupyter notebok script below, GCP T4x2
## To reproduce
Steps to reproduce the behavior:
```
prompt_text="If God is defined as something that is all powerful and all knowing, a strong artificial intelligence might be an actual God. If this happens the implications for religion are"
num_of_responses=15
!python transformers/examples/text-generation/run_generation.py --model_type GPT2 --model_name_or_path /spell/GPT2Model/GPT2Model/ --length 1000 --stop_token '<|endoftext|>' --temperature .7 --repetition_penalty 85 --prompt "$prompt_text" --num_return_sequences $num_of_responses
```
Nvidia-smi during run only reports gpu1 using memory gpu2 sits at 11m and never moves
Ultimately program errors out with OOM error:
Traceback (most recent call last):
File "transformers/examples/text-generation/run_generation.py", line 268, in <module>
main()
File "transformers/examples/text-generation/run_generation.py", line 237, in main
num_return_sequences=args.num_return_sequences,
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 1148, in generate
model_specific_kwargs=model_specific_kwargs,
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 1190, in _generate_no_beam_search
outputs = self(**model_inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_gpt2.py", line 615, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_gpt2.py", line 499, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_gpt2.py", line 236, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_gpt2.py", line 191, in forward
present = torch.stack((key.transpose(-2, -1), value)) # transpose to have same shapes for stacking
RuntimeError: CUDA out of memory. Tried to allocate 84.00 MiB (GPU 0; 14.76 GiB total capacity; 13.69 GiB already allocated; 37.44 MiB free; 13.93 GiB reserved in total by PyTorch)
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- `transformers` version: 2.9.0
- Platform: Linux-5.3.0-1018-gcp-x86_64-with-glibc2.10
- Python version: 3.8.2
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?:
- Using distributed or parallel set-up in script?: | 05-12-2020 12:50:21 | 05-12-2020 12:50:21 | I don't think we currently support multi-gpu in this script.<|||||>16 GB VRAM might not be enough to allow batching of generated texts.<|||||>Actually, if you install apex and are able to cast the model as FP16 using `model.half()`, you can get a batch size of up to 30 for the 1.5B. (nvidia-smi is reporting 14GB VRAM used on a single T4)
My tool is slightly different than the script, but I'll be sure to include bulk 1.5B generation as a documentation example.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
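A rough sketch of the fp16 idea from the previous comment, outside the example script; the prompt is shortened and the batch size that actually fits is an assumption that depends on the GPU:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").to("cuda").half()  # fp16 roughly halves VRAM usage
model.eval()

prompt = "If God is defined as something that is all powerful and all knowing,"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.7,
    max_length=300,
    num_return_sequences=5,  # lower this if the GPU still runs out of memory
)
texts = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```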
|
transformers | 4,313 | closed | Update README.md | 05-12-2020 12:49:55 | 05-12-2020 12:49:55 | ||
transformers | 4,312 | closed | how can I make BERT word embedding faster? | Hello guys,
I have code in which words get embedded with **Glove Embedding** for a number of iterations.
model.word_emb= utils.load_word_emb(args.glove_embed_path)
a =time.time()
w = "Transformer"
X = model.word_emb.get(w, model.word_emb['unk'])
b =time.time()
print(b-a)
and for each word the GloVe embedding lookup takes 0.000275 sec. Now I want to switch to BERT embeddings, but the problem is:
import time
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
bertModel = BertModel.from_pretrained('bert-base-uncased')
device = torch.device("cuda")

def bert_embedding(text):
    input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
    outputs = bertModel(input_ids)
    return outputs[0][0][1]

a = time.time()
w = "Transformers"
X = bert_embedding(w)
b = time.time()
print(b - a)
it takes around 0.04235 sec which is **154 times** slower than Glove
How should I use BERT but with a speed close to that of GloVe embeddings?
Thank you so much in advance! | 05-12-2020 12:23:54 | 05-12-2020 12:23:54 | It seems you're not actually putting anything on GPU. You need to call `bertModel.to(device)` and `input_ids.to(device)`.
The BERT model is much larger than GloVe, however, so you'll never get the same speed.<|||||>Thank you @LysandreJik. It actually increases speed but not that much.
Is there any model which is better than GloVe and also as fast at embedding as GloVe?<|||||>The fastest model we have in the library that I know of would be `DistilBERT`, which would still be slower than GloVe, but faster than BERT.
GloVe and transformer models are very different concepts, and solve different problems, hence why transformer models are way bigger/slower.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
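Putting the two suggestions together (GPU placement plus a smaller model), a hedged sketch that also batches several words per forward pass, which usually helps throughput more than the choice of model; the model name and the batch contents are assumptions:

```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

device = torch.device("cuda")
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased").to(device)
model.eval()

words = ["Transformer", "embedding", "speed"]
batch = tokenizer.batch_encode_plus(words, pad_to_max_length=True, return_tensors="pt")
batch = {k: v.to(device) for k, v in batch.items()}
with torch.no_grad():
    hidden_states = model(**batch)[0]  # (batch_size, seq_len, hidden_size)
vectors = hidden_states[:, 0]          # one vector per word, taken from the first token
```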
|
transformers | 4,311 | closed | RobertaForQuestionAnsweringModel | There is no ready-to-use RobertaForQuestionAnsweringModel class in transformers.
Please add it as you have done for other models. | 05-12-2020 11:59:41 | 05-12-2020 11:59:41 | Pull request welcome! |
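Until such a class exists in the library, a hedged sketch of what it could look like, mirroring `BertForQuestionAnswering`; this is illustrative only and not the implementation that later landed:

```python
import torch
from torch import nn
from transformers import RobertaConfig, RobertaModel

class RobertaForQuestionAnsweringSketch(nn.Module):
    """Span-prediction head on top of RoBERTa (illustrative only)."""

    def __init__(self, config: RobertaConfig):
        super().__init__()
        self.roberta = RobertaModel(config)
        self.qa_outputs = nn.Linear(config.hidden_size, 2)  # start / end logits

    def forward(self, input_ids, attention_mask=None):
        sequence_output = self.roberta(input_ids, attention_mask=attention_mask)[0]
        logits = self.qa_outputs(sequence_output)
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```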
transformers | 4,310 | closed | MobileBert form google-research added | # 🌟 New model addition
## Model description
[MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984)
Repo [link](https://github.com/google-research/google-research/tree/master/mobilebert)
## Open source status
* [x] Uploaded to HF model hub
[bert_config.json](https://s3.amazonaws.com/models.huggingface.co/bert/mrm8488/MobileBERT/bert_config.json)
[vocab.txt](https://s3.amazonaws.com/models.huggingface.co/bert/mrm8488/MobileBERT/vocab.txt)
[mobilebert_variables.ckpt.data-00000-of-00001](https://s3.amazonaws.com/models.huggingface.co/bert/mrm8488/MobileBERT/mobilebert_variables.ckpt.data-00000-of-00001)
[mobilebert_variables.ckpt.index](https://s3.amazonaws.com/models.huggingface.co/bert/mrm8488/MobileBERT/mobilebert_variables.ckpt.index)
[checkpoint](https://s3.amazonaws.com/models.huggingface.co/bert/mrm8488/MobileBERT/checkpoint)
* [ ] Convert model checkpoint to pytorch (or Keras h5, savedModel)
The [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) to convert BERT models to Pytorch fails. Based on the error output the error comes from modeling [script](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py):
```
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 77, in load_tf_weights_in_bert
```
Whole error:
```
INFO:transformers.modeling_bert:Converting TensorFlow checkpoint from /content/mobilebert
Traceback (most recent call last):
File "/content/convert_bert_original_tf_checkpoint_to_pytorch.py", line 61, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.bert_config_file, args.pytorch_dump_path)
File "/content/convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 77, in load_tf_weights_in_bert
init_vars = tf.train.list_variables(tf_path)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/training/checkpoint_utils.py", line 97, in list_variables
reader = load_checkpoint(ckpt_dir_or_file)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/training/checkpoint_utils.py", line 62, in load_checkpoint
filename = _get_checkpoint_filename(ckpt_dir_or_file)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/training/checkpoint_utils.py", line 384, in _get_checkpoint_filename
return checkpoint_management.latest_checkpoint(ckpt_dir_or_file)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/training/checkpoint_management.py", line 341, in latest_checkpoint
if file_io.get_matching_files(v2_path) or file_io.get_matching_files(
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/lib/io/file_io.py", line 363, in get_matching_files
return get_matching_files_v2(filename)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/lib/io/file_io.py", line 384, in get_matching_files_v2
compat.as_bytes(pattern))
tensorflow.python.framework.errors_impl.NotFoundError: /cns/tp-d/home/tensorflow-tpus/hongkuny/bert/pretrained_models/uncased_L-24_H-128_B-512_A-4_F-4_OPT/oss; No such file or directory
```
Based on the last line of the error log:
```
tensorflow.python.framework.errors_impl.NotFoundError: /cns/tp-d/home/tensorflow-tpus/hongkuny/bert/pretrained_models/uncased_L-24_H-128_B-512_A-4_F-4_OPT/oss; No such file or directory
```
It seems TF keeps a kind of directory with the paths of its *available* models, and this one is not there yet.
But maybe we can fix it using the modeling script in the MobileBERT repo: https://github.com/google-research/google-research/blob/master/mobilebert/modeling.py
Thoughts?
| 05-12-2020 10:47:30 | 05-12-2020 10:47:30 | Will close this to track this at more general #4185<|||||>Please the " checkpoint" file to reset the path to "mobilebert_variables.ckpt"
This can solved my issue.
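In case it helps others hitting the same `NotFoundError`, a hedged sketch of that edit in Python; the local directory name is an assumption:

```python
# The TF "checkpoint" file shipped with MobileBERT records an absolute path from
# the authors' filesystem; pointing it at the local variables file lets
# tf.train.list_variables / latest_checkpoint resolve correctly.
with open("mobilebert/checkpoint", "w") as f:
    f.write('model_checkpoint_path: "mobilebert_variables.ckpt"\n')
    f.write('all_model_checkpoint_paths: "mobilebert_variables.ckpt"\n')
```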
|
transformers | 4,309 | closed | Finetuning error: RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel: Missing key(s) in state_dict: “transformer.h.0.attn.masked_bias”, “transformer.h.1.attn.masked_bias” | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 05-12-2020 10:15:59 | 05-12-2020 10:15:59 | I think we're missing some info here, can you add your complete code snippet to the issue's title?
### 🤣🤣
You can even add some binary data in there for good measure.
Hehe I'm just kidding :trollface: <|||||>Managed: I just didn’t account for the May 7 transformers release!
Sent from my iPhone
> On 12 May 2020, at 2:08 PM, Julien Chaumond <[email protected]> wrote:
>
>
> I think we're missing some info here, can you add your complete code snippet to the issue's title?
>
> 🤣🤣
>
> You can even add some binary data in there for good measure.
>
> Hehe I'm just kidding
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub, or unsubscribe.
<|||||>The same issue in Colab. I use DialoGPT repo but issue is the same<|||||>Help, please . Anyone. I can't load on other devices. I have copied the model from the device where it was trained on GPU, now I cannot load on other device with CPU.<|||||>It's hard to help with this much information. Can you open a new issue, respecting the template, letting us know what's your issue?<|||||>https://github.com/huggingface/transformers/commit/05deb52dc170226fcad899c240ee9568986d8d5a#diff-8530ad4c95b70580d10d594b7411a8be seemed to break this pretrained model, which was last modified on February 18. There masked_bias was introduced. Try reverting to a version before that, like cec3cdd. |
transformers | 4,308 | closed | MPNet: Masked and Permuted Pre-training for Language Understanding | # 🌟 New model addition
## Model description
[MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/pdf/2004.09297.pdf)
use both Masked Language Model and Permuted Language Model
## Open source status
* [MPNet ] the model implementation is available: (https://github.com/microsoft/MPNet)
* [MPNet pretrain weight ] the model weights are available: (https://modelrelease.blob.core.windows.net/pre-training/MPNet/mpnet.example.pt)
* [@tobyoup@StillKeepTry ] who are the authors: (mention them, if possible by @gh-username)
| 05-12-2020 09:47:35 | 05-12-2020 09:47:35 | Hi huggingface/transformers,
I am Xu Tan from Microsoft Research. Thanks for mentioning MPNet in the huggingface/transformers repo.
MPNet is an advanced pre-trained model for language understanding, which inherits the advantages of BERT and XLNet while addressing their problems. We have MPNet pre-trained models, code (the version for huggingface/transformers) and downstream task results ready.
How can we proceed to incorporate MPNet into the huggingface/transformers repo? Thanks very much!
Best,
Xu
<|||||>+ RyanHuangNLP
<|||||>Hi @tobyoup, it's very cool that you want to contribute your architecture!
You can follow this [checklist](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model) to add your model. Once you open a PR, we'll be very happy to help you finish the integration!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@tobyoup @LysandreJik Is there any update about MPNET in transformers?<|||||>We are actively preparing the integration code, and it will be ready in the coming weeks.<|||||>@tobyoup Is MPNET ready in transformers? Hello, any updates?<|||||>@tobyoup @LysandreJik Is there any update about MPNET in transformers?<|||||>~The PR for the MPNET integration has not seen any progress yet.~ @tobyoup, don't hesitate to ping us if you need help or pointers towards the integration.
There are ongoing efforts to implement MPNet on https://github.com/huggingface/transformers/pull/5804 by @StillKeepTry<|||||>Thanks for the reminder! We are adapting the codes these days and we expect to give the first PR by this weekend.
Thanks
Xu
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,307 | closed | why Building wheel for tokenizers (PEP 517) ... error? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /Users/xuyingjie/venv/answer_brand_marketing_recognition/bin/python /Users/xuyingjie/venv/answer_brand_marketing_recognition/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /var/folders/k2/n3kff0pd6k995znpn70jbn6w0000gq/T/tmpwslk_3c2
cwd: /private/var/folders/k2/n3kff0pd6k995znpn70jbn6w0000gq/T/pip-install-_reqdzyi/tokenizers
Complete output (36 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/tokenizers
copying tokenizers/__init__.py -> build/lib/tokenizers
creating build/lib/tokenizers/models
copying tokenizers/models/__init__.py -> build/lib/tokenizers/models
creating build/lib/tokenizers/decoders
copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders
creating build/lib/tokenizers/normalizers
copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers
creating build/lib/tokenizers/pre_tokenizers
copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers
creating build/lib/tokenizers/processors
copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors
creating build/lib/tokenizers/trainers
copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers
creating build/lib/tokenizers/implementations
copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations
copying tokenizers/__init__.pyi -> build/lib/tokenizers
copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models
copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders
copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers
copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers
copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors
copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers
running build_ext
running build_rust
error: Can not find Rust compiler
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 05-12-2020 09:04:13 | 05-12-2020 09:04:13 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I have this problem too. How did you resolve it? Can you share your ideas?
transformers | 4,306 | closed | Multiple token prediction with MLM | # 🚀 Feature request
It would be great if the `fill_mask` interface would be able to predict multiple tokens at a time.
## Motivation
I didn't find any related issue. Using transformer for chemical data, queries like `fill_mask('CCCO<mask>C')` work fine. But writing `fill_mask('CC<mask>CO<mask>C')` I obtain:
```
~/miniconda3/envs/paccmann/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
553 values, predictions = topk.values.numpy(), topk.indices.numpy()
554 else:
--> 555 masked_index = (input_ids == self.tokenizer.mask_token_id).nonzero().item()
556 logits = outputs[i, masked_index, :]
557 probs = logits.softmax(dim=0)
ValueError: only one element tensors can be converted to Python scalars
```
## Your contribution
The user can easily implement an auto-regressive solution and given that `fill_mask` returns the `topk` tokens, one could even select between greedy search, beam search or a probabilistic sampling. But a one-shot approach would be preferable since it minimizes probabilities to aggregate errors as in auto-regression. Rather than outsourcing this to the user, I'd prefer a solution integrated into the package.
As minimum request, I would like to propose to raise a proper error message instead of the obtained `ValueError`.
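For reference, a rough sketch of the greedy auto-regressive fallback described above, implemented outside the pipeline; the choice of RoBERTa and the left-to-right filling order are assumptions:

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
model.eval()

def fill_masks_greedy(text):
    input_ids = tokenizer.encode(text, return_tensors="pt")
    while (input_ids == tokenizer.mask_token_id).any():
        with torch.no_grad():
            logits = model(input_ids)[0]
        mask_positions = (input_ids[0] == tokenizer.mask_token_id).nonzero()
        pos = mask_positions[0].item()               # fill the left-most mask first
        input_ids[0, pos] = logits[0, pos].argmax()  # greedy top-1 choice
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)
```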
| 05-12-2020 07:28:01 | 05-12-2020 07:28:01 | An error message would be great, want to submit a PR?
For multiple masked token support, I'm not entirely sure. The sampling might be very use case-specific.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I'm facing a situation where I've to fetch probabilities from BERT MLM for multiple words in a single sentence.
Original : "Mountain Dew is an energetic drink"
Masked : "[MASK] is an energetic drink"
But BERT MLM task doesn't consider two tokens at a time for the MASK. <|||||>@stephen-pilli
Here's how to mask multiple tokens.
```
import torch
# Assumption (not in the original comment): a RoBERTa-style masked LM, since
# the example sentence uses the "<mask>" token.
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
model.eval()

sentence = "The capital of France <mask> contains the Eiffel <mask>."
token_ids = tokenizer.encode(sentence, return_tensors='pt')
# print(token_ids)
token_ids_tk = tokenizer.tokenize(sentence)
print(token_ids_tk)

masked_position = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero()
masked_pos = [mask.item() for mask in masked_position]
print(masked_pos)

with torch.no_grad():
    output = model(token_ids)

last_hidden_state = output[0].squeeze()  # MLM logits over the vocabulary, one row per position

print("\n\n")
print("sentence : ", sentence)
print("\n")

list_of_list = []
for mask_index in masked_pos:
    mask_hidden_state = last_hidden_state[mask_index]
    idx = torch.topk(mask_hidden_state, k=100, dim=0)[1]
    words = [tokenizer.decode(i.item()).strip() for i in idx]
    list_of_list.append(words)
    print(words)

best_guess = ""
for j in list_of_list:
    best_guess = best_guess + " " + j[0]
print(best_guess)
``` |
transformers | 4,305 | closed | fixed missing torch module import | fixed missing torch module import in example usage code | 05-12-2020 07:21:29 | 05-12-2020 07:21:29 | |
transformers | 4,304 | closed | Latest Version Of HuggingFace is Not Using GPUs | I switched from v2.8 to v2.9 this morning, keeping all my experiments and environment the same, but the same code that took about 5 minutes per epoch on v2.8 now takes more than 2 hours per epoch on v2.9.
The code is custom and was working with v2.8 until yesterday.
Please verify the same. | 05-12-2020 06:42:36 | 05-12-2020 06:42:36 | If you don't tell us what code you use, we cannot help you. Be detailed and use code block. Don't post screenshots. <|||||>I agree. I apologize, was kinda annoyed that version diff can cause this. Will share a minimal example. (In the mean time, re-verifying via setting up different env's)<|||||>Did you manage to solve the error?<|||||>Hey I am sorry for the delay, I am having fever since yesterday, I am trying to replicate it. <|||||>Okay, so i am not able to replicate the same on a dummy test i had run just now; seems all good ; Will ping back if the same thing happens for my code; My apologies to great devs ;<|||||>@AdityaSoni19031997 I am also facing the same issue. New transformers version is not using GPU and it's quite slow.<|||||>I have solved the issue.
You have to manually set the device and load the tensors to GPU:
```
from transformers import BertTokenizer, BertModel, BertForMaskedLM
def assign_GPU(Tokenizer_output):
    tokens_tensor = Tokenizer_output['input_ids'].to('cuda:0')
    token_type_ids = Tokenizer_output['token_type_ids'].to('cuda:0')
    attention_mask = Tokenizer_output['attention_mask'].to('cuda:0')
    output = {'input_ids': tokens_tensor,
              'token_type_ids': token_type_ids,
              'attention_mask': attention_mask}
    return output
sentence = 'Hello World!'
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
model = BertModel.from_pretrained('bert-large-uncased')
inputs = assign_GPU(tokenizer(sentence, return_tensors="pt"))
model = model.to('cuda:0')
outputs = model(**inputs)
outputs
```<|||||>In my case, it was a driver conflict, IIRC.
transformers | 4,303 | closed | How to perform Agglomerative clustering on Albert model last hidden state output, it gives tuple of tensors for each sequence in a sentence. I am processing multiple sentences from a document, How to achieve this ? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 05-12-2020 06:24:37 | 05-12-2020 06:24:37 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
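Since the question in the title never got an answer in this thread, one possible approach as a hedged sketch: pool the per-token hidden states into one vector per sentence, then cluster those vectors. The model name, mean pooling, and the number of clusters are all assumptions:

```python
import torch
from sklearn.cluster import AgglomerativeClustering
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")
model.eval()

sentences = ["First sentence from the document.", "A second, related sentence.", "Something on a different topic."]
embeddings = []
with torch.no_grad():
    for text in sentences:
        inputs = tokenizer.encode_plus(text, return_tensors="pt")
        last_hidden = model(**inputs)[0]                       # (1, seq_len, hidden_size)
        embeddings.append(last_hidden.mean(dim=1).squeeze(0))  # mean-pool tokens into one sentence vector

features = torch.stack(embeddings).numpy()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(features)
```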
|
transformers | 4,302 | closed | Remove args from args.tf_dump_path | Edit args.tf_dump_path in convert_all_pt_checkpoints_to_tf | 05-12-2020 05:57:49 | 05-12-2020 05:57:49 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,301 | closed | why squad.py did not reproduce squad1.1 report result? | # 📚 Migration
## Information
<!-- Important information -->
Model I am using (Bert, XLNet ...):
Language I am using the model on (English...):
The problem arises when using:
* [x] the official example scripts: (give details below)
examples/question-answering/run_squad.py
* [x] my own modified scripts: (give details below)
'''
CUDA_VISIBLE_DEVICES=5 python examples/question-answering//run_squad.py \
--model_type bert \
--model_name_or_path bert-large-uncased-whole-word-masking \
--do_train \
--do_eval \
--data_dir EKMRC/data/squad1.1 \
--train_file train-v1.1.json \
--predict_file dev-v1.1.json \
--per_gpu_eval_batch_size=4 \
--per_gpu_train_batch_size=4 \
--gradient_accumulation_steps=6 \
--save_steps 3682 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir result/debug_squad/wwm_uncased_bert_large_finetuned_squad/ \
--overwrite_output_dir
'''
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
## Details
However, I did not reproduce the reported result. The repository says you should get the result below:
```
python $SQUAD_DIR/evaluate-v1.1.py $SQUAD_DIR/dev-v1.1.json ../models/wwm_uncased_finetuned_squad/predictions.json
{"exact_match": 86.91579943235573, "f1": 93.1532499015869}
```
my result is below:
```
python $SQUAD_DIR/evaluate-v1.1.py $SQUAD_DIR/dev-v1.1.json ../models/wwm_uncased_finetuned_squad/predictions.json
{"exact_match": 81.03, "f1": 88.02}
```
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux gpu19 3.10.0-1062.4.1.el7.x86_64 #1 SMP Fri Oct 18 17:15:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Python version: python3.6
- PyTorch version (GPU?): 1.4.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: parallel
<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch):
current version of transformers.
## Checklist
- [yes ] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
| 05-12-2020 02:15:48 | 05-12-2020 02:15:48 | I have just solved this problem.
You have to set an additional flag: --do_lower_case.
I wonder why run_squad.py behaves differently than run_glue.py, etc. Is there a code improvement on the way?<|||||>You shouldn't have to set `--do_lower_case` as it should be lowercased by default for that model.<|||||>> You shouldn't have to set `--do_lower_case` as it should be lowercased by default for that model.
I thought it is and it should be, but it isn't<|||||>Closing this b/c #4245 was merged
(we still need to investigate why the lowercasing is not properly populated by the model's config)
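In the meantime, a quick tokenizer-side sanity check (a hedged sketch, not part of run_squad.py; the model name is the one used above):
```python
from transformers import BertTokenizer

# If lowercasing is active, the tokens come back lowercased.
tok = BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
print(tok.tokenize("Hello World"))

# Forcing it explicitly mirrors what the --do_lower_case flag does for run_squad.py.
tok_lower = BertTokenizer.from_pretrained(
    "bert-large-uncased-whole-word-masking", do_lower_case=True
)
print(tok_lower.tokenize("Hello World"))
```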
|
transformers | 4,300 | closed | Fix nn.DataParallel compatibility in PyTorch 1.5 | As reported in #3936
PyTorch 1.5 changes the handling of nn.Parameters in `DataParallel` replicas. (https://github.com/pytorch/pytorch/pull/33907). The replicas now don't have parameters() anymore.
This PR updates our `self.device` and `self.dtype` helpers to mimic `nn.Module`'s `parameters()` helper but for attributes, i.e. recursively look for an attribute of type Tensor.
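For illustration, here is a rough sketch of such a recursive lookup (an approximation, not the exact code in this PR):
```python
import torch
from torch import nn


def first_tensor_attribute(module: nn.Module):
    # Look for a plain tensor attribute on the module itself (DataParallel replicas
    # in PyTorch 1.5 carry their weights this way instead of via parameters()).
    for value in vars(module).values():
        if torch.is_tensor(value):
            return value
    # Otherwise recurse into submodules.
    for child in module.children():
        found = first_tensor_attribute(child)
        if found is not None:
            return found
    return None


# A device/dtype helper can then fall back to first_tensor_attribute(self).device / .dtype
# when next(self.parameters()) raises StopIteration.
```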
Reformer and TransfoXL seem to be doing fancy things based on the module's Parameters so I didn't attempt to fix them.
Finally I'm introducing a `multigpu` CI flag. CI does not currently run on multiple GPUs so remember to run it locally.
---
Also pinging @ngimel the author of the change in PyTorch, to check if I'm doing something stupid here. | 05-12-2020 01:25:02 | 05-12-2020 01:25:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4300?src=pr&el=h1) Report
> Merging [#4300](https://codecov.io/gh/huggingface/transformers/pull/4300?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cb203fae4e7964e9e99400b375d660ebce765ee&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `91.66%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4300?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4300 +/- ##
==========================================
- Coverage 78.21% 78.21% -0.01%
==========================================
Files 120 120
Lines 20038 20040 +2
==========================================
+ Hits 15673 15674 +1
- Misses 4365 4366 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4300?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <75.00%> (ø)` | |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.20% <100.00%> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.82% <100.00%> (ø)` | |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.66% <100.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.98% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4300?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4300?src=pr&el=footer). Last update [7cb203f...7eef4f5](https://codecov.io/gh/huggingface/transformers/pull/4300?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Update: Also asked about this on shared [PyTorch Slack channel](https://app.slack.com/client/T2077MDKQ/CN963QVJB)<|||||>I don't see anything obviously wrong, but I'm not very familiar with your codebase. You are looking for tensor attributes - are you sure that you don't have any other tensor attributes that don't correspond to former parameters?
Finally, we also have _former_parameters in pytorch, introduced here https://github.com/pytorch/pytorch/pull/36523. You may find it useful, but we can't guarantee that it will remain as a stable API. <|||||>I think you forgot GPT-2:
```
transformers/modeling_gpt2.py", line 464, in forward
attention_mask = attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
StopIteration
```
<|||||>Yes, if I remember correctly I didn't try to remove all the calls to `next(self.parameters())` in this PR – do you want to open a PR to fix this? |
transformers | 4,299 | closed | Issue with XLNet using xlnet-base-cased | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLNet
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD v1.1
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Clone the repository
```
!git clone https://github.com/huggingface/transformers.git
```
2.
```
!python ./transformers/examples/question-answering/run_squad.py \
--model_type xlnet \
--model_name_or_path xlnet-base-cased \
--do_train \
--do_eval \
--train_file $SQuAD_Dir/train-v1.1.json \
--predict_file $SQuAD_Dir/dev-v1.1.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./model_output \
--per_gpu_eval_batch_size=4 \
--per_gpu_train_batch_size=4 \
--save_steps 5000
```
3. Error
```
Epoch: 0% 0/2 [00:00<?, ?it/s]
Iteration: 0% 0/15852 [00:00<?, ?it/s]Traceback (most recent call last):
File "./transformers/examples/question-answering/run_squad.py", line 830, in <module>
main()
File "./transformers/examples/question-answering/run_squad.py", line 769, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "./transformers/examples/question-answering/run_squad.py", line 204, in train
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'cls_index'
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The model should start training from the first epoch.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.0
- Platform: Google Colab Pro
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (Yes)
- Tensorflow version (GPU?): 2.2.0 (Yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 05-11-2020 23:40:53 | 05-11-2020 23:40:53 | I also have the same problem @alexandrenriq <|||||>I think there's a mistake in the code which sets the `AutoModelForQuestionAnswering` as `XLNetForQuestionAnsweringSimple`. You can run the code by substituting `AutoModelForQuestionAnswering` into `XLNetForQuestionAnswering` (or by removing `cls_index` and `p_mask` in the batches to make `XLNetForQuestionAnsweringSimple` work).
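For example, a hedged sketch of that workaround outside of `run_squad.py` (not the exact upstream fix):
```python
from transformers import XLNetForQuestionAnswering, XLNetTokenizer

# Option 1: load the full QA head, which accepts the extra cls_index / p_mask inputs
# that run_squad.py builds for XLNet.
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForQuestionAnswering.from_pretrained("xlnet-base-cased")

# Option 2 (alternative): keep XLNetForQuestionAnsweringSimple but drop the unsupported
# keys from each batch before the forward call, e.g.:
#   inputs.pop("cls_index", None)
#   inputs.pop("p_mask", None)
```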
Nevertheless, after the training is done, I can't reproduce the scores right (`Results: {'exact': 0.03784295175023652, 'f1': 0.6317807409886281,...`).<|||||>I also found the same issue, and I believe this issue appears for a very very long time. <|||||>see also #3535<|||||>> see also #3535
@brettkoonce I saw it, it didn't work. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,298 | closed | Fix BART tests on GPU | 05-11-2020 23:36:29 | 05-11-2020 23:36:29 | ||
transformers | 4,297 | closed | pin TF to 2.1 | 05-11-2020 22:56:06 | 05-11-2020 22:56:06 | ||
transformers | 4,296 | closed | Error Training T5 Transformer Translation Model | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
We are trying to train a T5 model to translate between English, Modern Standard Arabic, Levantine Arabic, and Maghrebi Arabic. We are using TensorFlow for data processing, together with both the default SentencePiece tokenizer and a custom SentencePiece tokenizer, each set to the default vocab size of 32,128. Below are the hyperparameters we used for training the model, along with the T5 `TaskRegistry` definition for one of the languages.
```
t5.data.TaskRegistry.remove("mag_msa_translation")
t5.data.TaskRegistry.add(
#name of the Task
"mag_msa_translation",
#Supply a function which returns a tf.data.Dataset
dataset_fn=mag_msa_translation_dataset_fn,
splits=["train", "validation"],
# Supply a function which preprocesses text from the tf.data.Dataset.
text_preprocessor=[mag_msa_translation_preprocessor],
# Use the same vocabulary that we used for pre-training.
sentencepiece_model_path=filepath, #str(multibpemb.model_file), #t5.data.DEFAULT_SPM_PATH
# Lowercase targets before computing metrics.
postprocess_fn = t5.data.postprocessors.lower_text,
# We'll use accuracy as our evaluation metric.
metric_fns=[t5.evaluation.metrics.accuracy],
# Not required, but helps for mixing and auto-caching.
#num_input_examples=num_nq_examples
)
```
```
STEPS = 1000 #@param {type: "integer"}
model.train(
mixture_or_task_name="ar_translation",
steps=STEPS,
save_steps=100,
sequence_length={"inputs": 128, "targets": 128}, #128
split="train",
batch_size=32, #32
optimizer=functools.partial(transformers.AdamW, lr=1e-4),
)
```
Why are we receiving the error posted below?

Please let me know if you need to see more code or have any additional clarifying questions. Thank you in advance!
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 05-11-2020 22:50:07 | 05-11-2020 22:50:07 | Which library are you using? Could you maybe post a complete code example (with all relevant imports) to reproduce the error? <|||||>https://github.com/jameschartouni/arabic_translation/blob/google-t5/Model_1.ipynb
Let me know if you have difficulty viewing the code! We are using colab and use a fresh install. <|||||>I see you use the official T5 code that connects to HF code. The error seems to be happening in the `t5` library though (in `t5/models/hf_model.py`). Could you open an issue there? We haven't written the code so it's very hard for us to debug this. |
transformers | 4,295 | closed | [Docs, Notebook] Include generation pipeline | - adds the generation pipeline to usage docs with a couple of examples
- adds the generation pipeline to the notebook
- Also, I updated the config of all models that can be used with generation with model specific params: GPT2, OpenAI-GPT, XLNet, TransfoXL, Reformer, so that they now include a `text-generation` special params dict. | 05-11-2020 20:53:23 | 05-11-2020 20:53:23 | |
transformers | 4,294 | closed | Documentation specification | 05-11-2020 20:34:59 | 05-11-2020 20:34:59 | ||
transformers | 4,293 | closed | No crossattention layers in decoder from pretrained encoder-decoder model | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Custom sequence to sequence task.
## To reproduce
Steps to reproduce the behavior:
1. Run the following script:
```
from transformers import EncoderDecoderModel
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
print(model.get_decoder())
```
2. You will find that there are no `crossattention` layers in the decoder.
## Expected behavior
Cross-attention layers should be initialized in the decoder (the layers themselves, not their weights).
## Environment info
- `transformers` version: 2.9.0
- Platform: Linux-5.5.6-1.el7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.3.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
| 05-11-2020 20:09:58 | 05-11-2020 20:09:58 | A simple fix could be:
1. Adding in file [modeling_encoder_decoder.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_encoder_decoder.py) @ line 140:
```
kwargs_decoder["is_decoder"]=True
```
2. Adding in file [modeling_auto.py](https://github.com/huggingface/transformers/blob/61d22f9cc7fbb0556d1e14447957a7ac6441421b/src/transformers/modeling_auto.py#L570) @ line 570
```
if not isinstance(config, PretrainedConfig):
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
if kwargs.pop("is_decoder", None):
config.is_decoder=True
```
That is it! It should be good to go. The `crossattention` layers will be initialized, although their weights will not be; those will have to be trained.
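A quick way to verify the patch (a hedged sketch; the exact attribute path below is an assumption and may differ between versions):
```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
# With is_decoder=True propagated, each decoder layer should expose a crossattention sub-module.
first_decoder_layer = model.decoder.bert.encoder.layer[0]
print(hasattr(first_decoder_layer, "crossattention"))  # expected: True after the change
```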
<|||||>Hi @hammad001 - thanks a lot for finding this bug. It has helped me a lot. This PR: https://github.com/huggingface/transformers/pull/4680 should fix it :-) |