url (string, 62–66) | repository_url (string, 1 class) | labels_url (string, 76–80) | comments_url (string, 71–75) | events_url (string, 69–73) | html_url (string, 50–56) | id (int64, 377M–2.15B) | node_id (string, 18–32) | number (int64, 1–29.2k) | title (string, 1–487) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k–1.71k) | updated_at (int64, 1.54k–1.71k) | closed_at (int64, 1.54k–1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0–234k, nullable) | reactions (dict) | timeline_url (string, 71–75) | state_reason (string, 3 classes) | draft (bool) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/4513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4513/comments | https://api.github.com/repos/huggingface/transformers/issues/4513/events | https://github.com/huggingface/transformers/issues/4513 | 623,010,971 | MDU6SXNzdWU2MjMwMTA5NzE= | 4,513 | Couldn't reach server GPT-2 | {
"login": "MSMOON",
"id": 12758797,
"node_id": "MDQ6VXNlcjEyNzU4Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/12758797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MSMOON",
"html_url": "https://github.com/MSMOON",
"followers_url": "https://api.github.com/users/MSMOON/followers",
"following_url": "https://api.github.com/users/MSMOON/following{/other_user}",
"gists_url": "https://api.github.com/users/MSMOON/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MSMOON/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MSMOON/subscriptions",
"organizations_url": "https://api.github.com/users/MSMOON/orgs",
"repos_url": "https://api.github.com/users/MSMOON/repos",
"events_url": "https://api.github.com/users/MSMOON/events{/privacy}",
"received_events_url": "https://api.github.com/users/MSMOON/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
}
] | closed | false | null | [] | [
"I also created this stack overflow question here: https://stackoverflow.com/questions/61944526/oserror-couldnt-reach-server-gpt2-config-json",
"It is odd that it works from CLI but not from within your script. That does not make a lot of sense. Can you try this?\r\n\r\n```python\r\nscorer = LMScorer.from_pretrained(\"gpt2\", force_download=True)\r\n```",
"Thanks, Bram. I tried that code but I get the same error. I believe it has something to do with 1) apache2 or ubuntu config and what I've allowed it to connect to or 2) some download I am missing because it has worked previously.\r\n\r\nI tried downloading the gpt2 config, model.bin, and vocab file but I would either get the same error as above or get this error:\r\n\r\n> ValueError: Unrecognized model name.Can be one of: gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2:",
"If you are sure that you have manually downloaded all files to the correct folder, you can disable the online look-up. This is useful if you have, as you say, network restrictions.\r\n\r\n```python\r\nscorer = LMScorer.from_pretrained(\"gpt2\", local_files_only=True)\r\n```",
"Still nothing yet. \r\n1. I downloaded everything from https://huggingface.co/gpt2#list-files\r\n2. Added files to the same directory the script is in\r\n3. changed names (e.g. gpt2-config.json to config.json)\r\n\r\nIs there anything I am missing?\r\n\r\n",
"This is likely to be a problem with the LMScorer rather than with this transformers library. Looking t the source code, it does not pass they keyword arguments down to model init. I suggest that you make an issue over at the library that you used.\r\n\r\nhttps://github.com/simonepri/lm-scorer/blob/master/lm_scorer/models/gpt2.py",
"Still nothing. I believe it is my apache2 configurations for access but I haven't figure out how yet. ",
"Closing this. See continuation here: https://github.com/simonepri/lm-scorer/issues/8#event-3386811426"
] | 1,590 | 1,590 | 1,590 | NONE | null | I have tried to use gpt2 on Ubuntu with Vagrant. This is the code:
```python
import torch
from lm_scorer.models.auto import AutoLMScorer as LMScorer

scorer = LMScorer.from_pretrained("gpt2")
```
I get this error:
> AH01215: OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json' to download pretrained model configuration file.
It has worked before, but I had to reset my virtual environment and now it no longer works. I think it has something to do with my Apache configuration.
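Before digging further into Apache, one quick check that separates a network problem from a library problem is to fetch the config file directly from the same environment (a minimal sketch; the URL is the one from the error above):

```python
import requests

# The exact URL from the OSError above; a timeout or connection error here
# points at the Vagrant/Apache network setup rather than at transformers.
url = "https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json"
response = requests.get(url, timeout=10)
print(response.status_code, len(response.content))
```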
Also, it works in the terminal but not in a Python script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4513/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4512/comments | https://api.github.com/repos/huggingface/transformers/issues/4512/events | https://github.com/huggingface/transformers/issues/4512 | 623,002,083 | MDU6SXNzdWU2MjMwMDIwODM= | 4,512 | ValueError: TracedModules don't support parameter sharing between modules | {
"login": "catqaq",
"id": 42762740,
"node_id": "MDQ6VXNlcjQyNzYyNzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/42762740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/catqaq",
"html_url": "https://github.com/catqaq",
"followers_url": "https://api.github.com/users/catqaq/followers",
"following_url": "https://api.github.com/users/catqaq/following{/other_user}",
"gists_url": "https://api.github.com/users/catqaq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/catqaq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/catqaq/subscriptions",
"organizations_url": "https://api.github.com/users/catqaq/orgs",
"repos_url": "https://api.github.com/users/catqaq/repos",
"events_url": "https://api.github.com/users/catqaq/events{/privacy}",
"received_events_url": "https://api.github.com/users/catqaq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
}
] | closed | false | null | [] | [
"Can you give more information? This is too brief. Please post the full error that you get (also called error or stack trace) and _do not_ post it as a screenshot but use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks) instead.",
"As I said: _use code blocks_ please. It is unclear what your comments are and what the code is. _Use those code blocks_ - it's super easy.\r\n\r\nAlso, in your original post you used PyTorch, and now you post TF code. You can't torch.jit a TF model.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
## Information
Language I am using the model on: English
## To reproduce
Steps to reproduce the behavior:
1. Run the "Quick tour" code:
```python
import torch
from transformers import *

# Transformers has a unified API
# for 10 transformer architectures and 30 pretrained weights.
#          Model          | Tokenizer          | Pretrained weights shortcut
MODELS = [(BertModel,       BertTokenizer,       'bert-base-uncased'),
          (OpenAIGPTModel,  OpenAIGPTTokenizer,  'openai-gpt'),
          (GPT2Model,       GPT2Tokenizer,       'gpt2'),
          (CTRLModel,       CTRLTokenizer,       'ctrl'),
          (TransfoXLModel,  TransfoXLTokenizer,  'transfo-xl-wt103'),
          (XLNetModel,      XLNetTokenizer,      'xlnet-base-cased'),
          (XLMModel,        XLMTokenizer,        'xlm-mlm-enfr-1024'),
          (DistilBertModel, DistilBertTokenizer, 'distilbert-base-cased'),
          (RobertaModel,    RobertaTokenizer,    'roberta-base'),
          (XLMRobertaModel, XLMRobertaTokenizer, 'xlm-roberta-base'),
          ]

# To use TensorFlow 2.0 versions of the models, simply prefix the class names with 'TF',
# e.g. `TFRobertaModel` is the TF 2.0 counterpart of the PyTorch model `RobertaModel`

# Let's encode some text in a sequence of hidden-states using each model:
for model_class, tokenizer_class, pretrained_weights in MODELS:
    # Load pretrained model/tokenizer
    tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
    model = model_class.from_pretrained(pretrained_weights)

    # Encode text
    input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)])  # Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model.
    with torch.no_grad():
        last_hidden_states = model(input_ids)[0]  # Models outputs are now tuples

# Each architecture is provided with several classes for fine-tuning on down-stream tasks, e.g.
BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction,
                      BertForSequenceClassification, BertForTokenClassification, BertForQuestionAnswering]

# All the classes for an architecture can be initiated from pretrained weights for this architecture
# Note that additional weights added for fine-tuning are only initialized
# and need to be trained on the down-stream task
pretrained_weights = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(pretrained_weights)
for model_class in BERT_MODEL_CLASSES:
    # Load pretrained model/tokenizer
    model = model_class.from_pretrained(pretrained_weights)

    # Models can return full list of hidden-states & attentions weights at each layer
    model = model_class.from_pretrained(pretrained_weights,
                                        output_hidden_states=True,
                                        output_attentions=True)
    input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")])
    all_hidden_states, all_attentions = model(input_ids)[-2:]

    # Models are compatible with Torchscript
    model = model_class.from_pretrained(pretrained_weights, torchscript=True)
    traced_model = torch.jit.trace(model, (input_ids,))

    # Simple serialization for models and tokenizers
    model.save_pretrained('./directory/to/save/')  # save
    model = model_class.from_pretrained('./directory/to/save/')  # re-load
    tokenizer.save_pretrained('./directory/to/save/')  # save
    tokenizer = BertTokenizer.from_pretrained('./directory/to/save/')  # re-load

# SOTA examples for GLUE, SQUAD, text generation...
```
2. Encountered the bug:
```
  File "/home/**/anaconda3/envs/dl/lib/python3.6/site-packages/torch/jit/__init__.py", line 1860, in check_unique
    raise ValueError("TracedModules don't support parameter sharing between modules")
ValueError: TracedModules don't support parameter sharing between modules
```
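For reference, tracing a single model in isolation is often a useful way to narrow the failure down. Below is a minimal sketch (assuming `bert-base-uncased`); the `torchscript=True` flag makes `from_pretrained` clone the tied input/output embedding weights so that `torch.jit.trace` does not see shared parameters:

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Without torchscript=True the input and output embeddings stay tied,
# which is exactly the kind of sharing that triggers the TracedModules error.
model = BertForMaskedLM.from_pretrained('bert-base-uncased', torchscript=True)
model.eval()

input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)])
traced_model = torch.jit.trace(model, (input_ids,))
torch.jit.save(traced_model, 'traced_bert.pt')
```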
## Environment info
- `transformers` version: 2.9.1
- Platform: Ubuntu 16.04
- Python version: 3.6
- PyTorch version (GPU?): 1.2.0 (GPU)
- Tensorflow version (GPU?): 2.0.0 (GPU)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4512/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4511/comments | https://api.github.com/repos/huggingface/transformers/issues/4511/events | https://github.com/huggingface/transformers/issues/4511 | 622,904,022 | MDU6SXNzdWU2MjI5MDQwMjI= | 4,511 | AttributeError: 'SummaryWriter' object has no attribute 'add_hparams' | {
"login": "zhuqunxi",
"id": 22273557,
"node_id": "MDQ6VXNlcjIyMjczNTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/22273557?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuqunxi",
"html_url": "https://github.com/zhuqunxi",
"followers_url": "https://api.github.com/users/zhuqunxi/followers",
"following_url": "https://api.github.com/users/zhuqunxi/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuqunxi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuqunxi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuqunxi/subscriptions",
"organizations_url": "https://api.github.com/users/zhuqunxi/orgs",
"repos_url": "https://api.github.com/users/zhuqunxi/repos",
"events_url": "https://api.github.com/users/zhuqunxi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuqunxi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
}
] | closed | false | null | [] | [
"Please fill out the template. It is there for a reason. It isn't even clear whether you use your own scripts or ours. _Fill out the template._\r\n\r\nSee this question, which might help: https://github.com/lanpa/tensorboardX/issues/502",
"hello when i use \r\n`python run_language_modeling.py \\\r\n --output_dir=chinese_finetuned_lm \\\r\n --model_type=bert \\\r\n --model_name_or_path=bert-base-chinese \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE \\\r\n --mlm\r\n`\r\n i find the same error\r\n`Traceback (most recent call last):\r\n File \"run_language_modeling.py\", line 281, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 245, in main\r\n trainer.train(model_path=model_path)\r\n File \"/home/zhongqi/anaconda3/envs/transformers_bert/lib/python3.6/site-packages/transformers/trainer.py\", line 418, in train\r\n self.tb_writer.add_hparams(self.args.to_sanitized_dict(), metric_dict={})\r\nAttributeError: 'SummaryWriter' object has no attribute 'add_hparams'`\r\nhow to deal with it? and my protobuf is 3.12.1",
"@Mozen Can you update to the latest transformers? Many things have changed - we now use a custom trainer class for the example scripts. Let me know whether that helps!",
"@BramVanroy i used is already the latest version, and my torch version is 1.1.0, Related to this?",
"> hello when i use\r\n> `python run_language_modeling.py \\ --output_dir=chinese_finetuned_lm \\ --model_type=bert \\ --model_name_or_path=bert-base-chinese \\ --do_train \\ --train_data_file=$TRAIN_FILE \\ --do_eval \\ --eval_data_file=$TEST_FILE \\ --mlm `\r\n> i find the same error\r\n> `Traceback (most recent call last): File \"run_language_modeling.py\", line 281, in <module> main() File \"run_language_modeling.py\", line 245, in main trainer.train(model_path=model_path) File \"/home/zhongqi/anaconda3/envs/transformers_bert/lib/python3.6/site-packages/transformers/trainer.py\", line 418, in train self.tb_writer.add_hparams(self.args.to_sanitized_dict(), metric_dict={}) AttributeError: 'SummaryWriter' object has no attribute 'add_hparams'`\r\n> how to deal with it? and my protobuf is 3.12.1\r\n\r\nI found 'add_hparams'` only exsiting in torch >1.3.1, so I update the version of torch, the problem is solved! Moreover, when torch >1.3.1, you should update the version of cuda, at least >= cuda 9.2.",
"> @BramVanroy i used is already the latest version, and my torch version is 1.1.0, Related to this?\r\n\r\nshould update torch >1.3.1",
"@zhuqunxi OK thanks",
"> @Mozen Can you update to the latest transformers? Many things have changed - we now use a custom trainer class for the example scripts. Let me know whether that helps!\r\n\r\nThanks for helping me. I have fixed this problem by myself.",
"If you had followed the template, and posted all the requested information such as your environment, this would have been solved much more quickly.",
"> If you had followed the template, and posted all the requested information such as your environment, this would have been solved much more quickly.\r\n\r\nAwesome,thanks for your advice. I really need to learn how to ask questions.",
"> > @BramVanroy i used is already the latest version, and my torch version is 1.1.0, Related to this?\r\n> \r\n> should update torch >1.3.1\r\n\r\nUpgrading `torch` should not be the ideal solution. The issue arises because of differences in `SummaryWriter` from `torch.utils.tensorboard` and `tensorboardX` in [transformers/trainer.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L46). With following environment:\r\n```\r\nprotobuf 3.12.1\r\ntensorboard 2.2.1\r\ntensorboard-plugin-wit 1.6.0.post3\r\ntensorboardX 2.0+022f060\r\ntorch 1.1.0\r\ntransformers 2.10.0\r\n```\r\nit is easy to see:\r\n```\r\n>>> from tensorboardX import SummaryWriter as SummaryWriter_tbX\r\n>>> from torch.utils.tensorboard import SummaryWriter\r\n>>>\r\n>>> writer = SummaryWriter_tbX()\r\n>>> writer.add_hparams({'lr': 1e-5, 'bsize': 20, 'n_hidden': 100}, {'accuracy': 0, 'loss': 0})\r\n>>>\r\n>>> writer = SummaryWriter()\r\n>>> writer.add_hparams({'lr': 1e-5, 'bsize': 20, 'n_hidden': 100}, {'accuracy': 0, 'loss': 0})\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nAttributeError: 'SummaryWriter' object has no attribute 'add_hparams'\r\n```\r\n(minimal code from: https://github.com/lanpa/tensorboardX/issues/502#issue-486036833)\r\nAlso, passing `tb_writer=None` explicitly to `Trainer` does not ignore using the tensorboard because of [this](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L202). I think it might be more convenient if user is allowed the option to use/ignore tensorboard and further, `tensorboardX` should probably be first in `try:except` block when importing the `SummaryWriter` as it is easier to upgrade it than `torch` (@BramVanroy ).",
"@suamin The `Trainer` requires torch 1.3.1+, we'll make sure to mention this in the README.",
"Hello, I got this error even if I have a torch version = 1.5.1, I don't know why\r\n\r\nI0717 09:08:45.343556 139953119131392 trainer.py:208] You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface.\r\nTraceback (most recent call last):\r\n File \"run_ner.py\", line 304, in <module>\r\n main()\r\n File \"run_ner.py\", line 229, in main\r\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\r\n File \"/opt/conda/lib/python3.7/site-packages/transformers/trainer.py\", line 429, in train\r\n self.tb_writer.add_hparams(self.args.to_sanitized_dict(), metric_dict={})\r\nAttributeError: 'SummaryWriter' object has no attribute 'add_hparams'",
"see the source code: trainer.py\r\nyou will find that using \"from torch.utils.tensorboard import SummaryWriter\" first, if not in current torch, then use \"from tensorboardX import SummaryWriter\".\r\nSo, U need check your pytorch version. my torch 1.2.0, has \"torch.utils.tensorboard.SummaryWriter\", but it didn't has add_hparams. So you should update your pytorch.\r\nAlso, U can change \"trainer.py\" source code, force import SummaryWriter from tensorboardX\r\n\r\n",
"I fixed the error, thank you !\r\n"
] | 1,590 | 1,595 | 1,590 | NONE | null | # 🐛 Bug
## Information
```
Traceback (most recent call last):
  File "F:/Kaggle/Hug/Colab/main.py", line 105, in <module>
    trainer.train()
  File "c:\programdata\anaconda3\lib\site-packages\transformers\trainer.py", line 359, in train
    self.tb_writer.add_hparams(self.args.to_sanitized_dict(), metric_dict={})
AttributeError: 'SummaryWriter' object has no attribute 'add_hparams'
```
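A quick way to see which `SummaryWriter` ends up being used and whether it supports `add_hparams` (a minimal sketch, mirroring the import fallback in trainer.py; per the discussion above, torch's own writer only gained `add_hparams` after 1.3.1):

```python
import torch

try:
    from torch.utils.tensorboard import SummaryWriter
except ImportError:
    from tensorboardX import SummaryWriter  # trainer.py falls back the same way

# False here means trainer.py's add_hparams call cannot work without
# upgrading torch (or forcing the tensorboardX writer instead).
print(torch.__version__, hasattr(SummaryWriter, "add_hparams"))
```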
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4511/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4510/comments | https://api.github.com/repos/huggingface/transformers/issues/4510/events | https://github.com/huggingface/transformers/pull/4510 | 622,866,382 | MDExOlB1bGxSZXF1ZXN0NDIxNjUwMTYz | 4,510 | [HUGE] Refactoring tokenizers backend - padding - truncation - pre-tokenized pipeline - fast tokenizers - tests | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=h1) Report\n> Merging [#4510](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935&el=desc) will **increase** coverage by `0.54%`.\n> The diff coverage is `92.01%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4510 +/- ##\n==========================================\n+ Coverage 76.89% 77.43% +0.54% \n==========================================\n Files 128 130 +2 \n Lines 21854 21966 +112 \n==========================================\n+ Hits 16804 17010 +206 \n+ Misses 5050 4956 -94 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.33% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <ø> (+5.11%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `91.55% <91.55%> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.59% <92.59%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.14% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <100.00%> (-0.80%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.08% <100.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.82% <100.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (-2.39%)` | :arrow_down: |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/4510/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=footer). Last update [9931f81...52a30d6](https://codecov.io/gh/huggingface/transformers/pull/4510?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok, I morphed this in a large refactoring of the tokenizer code and test to make it more flexible and have a better API.\r\n\r\nHere is a summary of the changes:\r\n- there is now a new main user-facing method: `__call__` i.e. model_input = tokenizer(text, **kwargs) which should be the main entry point for converting text in model inputs in the future,\r\n- the padding/truncation logic was refactored to cover more cases and make the most common-case more natural to access\r\n- pre-tokenized inputs (e.g. for NER or POS tagging) are handled a lot better\r\n- the backend code was refactored and split in several files.\r\n\r\nThere is no breaking change in the user-facing methods (`encode`, `encode_plus`, `batch_encode_plus`, `tokenize`, `convert_XXX`). There is a breaking change in the internal method `prepare_for_model` which is now a private method `_prepare_for_model` with a simplified signature.\r\n\r\nAll the details are given in the updated description of the PR.\r\n\r\ncc @LysandreJik @julien-c @patrickvonplaten @sshleifer @mfuntowicz @yjernite @srush @mariamabarham @lhoestq @VictorSanh @jplu @stefan-it @BramVanroy ",
"I always love to see changes that improve the usability. I think using __call__ is one that can really make things easier for people to use. I also like pre-tokenized inputs a lot, since most of my data is pre-tokenized anyway. \r\n\r\nThe changes are quite big to go over, so just checking: hopefully there are very clear error messages when users choose incompatible options when running the tokenization process. Making the tokenizer easier to use by having a single entry-point is great, but not so much if it can create more user mistakes that are not clear to the user. Clear error messages are key.\r\n\r\nA feature request, that I discussed with someone before but I don't remember who, is that it would be nice if the tokenizers could have an optional `device` argument. If we use return_tensors, it should return the tensors immediately on the given devices, e.g.\r\n\r\n```python\r\nencoded_on_device = tokenizer([\"Hello world.\", \"Who likes cookies?\"], device=torch.device(\"cuda:0\"))\r\n# or\r\nencoded_on_device = tokenizer([\"Hello world.\", \"Who likes cookies?\"], device=training_args.device)\r\n```\r\n\r\nMight even allow different type of values like device integers or \"cuda\" or \"cpu\" strings, and so on.\r\n\r\nGreat job! Looking forward to using this in practice.",
"This is awesome!! Really great work and congratulations with this huge rework of the tokenizers!!!\r\n\r\nIt is a bit too huge to go through everything but as far as I can see, the way to use the tokenizers now are way more accessible, mostly the pre-tokenizerd part.\r\n\r\n> A feature request, that I discussed with someone before but I don't remember who, is that it would be nice if the tokenizers could have an optional device argument. If we use return_tensors, it should return the tensors immediately on the given devices\r\n\r\n@BramVanroy I don't think it is the place here because it is not compliant with TF :) I think that the tokenizers should stay as much framework agnostic as possible otherwise if we start to say \"if you want to use the tokenizer for PT do that, and for TF do this\" it becomes more complicated to maintain. Of course this is only my opinion nothing more :)",
"> @BramVanroy I don't think it is the place here because it is not compliant with TF :) I think that the tokenizers should stay as much framework agnostic as possible otherwise if we start to say \"if you want to use the tokenizer for PT do that, and for TF do this\" it becomes more complicated to maintain. Of course this is only my opinion nothing more :)\r\n\r\nBut that's what we do for `return_tensors` anyway, right?",
"> But that's what we do for return_tensors anyway, right?\r\n\r\nExactly, and I think the same about this parameter, it adds complexity, while this can be easily done afterward.",
"> Exactly, and I think the same about this parameter, it adds complexity, while this can be easily done afterward.\r\n\r\nIt is true that this can be done easily afterwards, but I suppose this is one of those cases: how much ease-of-use do you want your library to have while also taking into account the complexity of the library itself. My main argument is that from a usability perspective it would be awesome to be able to just provide your text to the tokenizer and you immediately get the encoded input back that you can feed to your model without having to do anything else. You then even do this:\r\n\r\n```python\r\nout = model(**tokenizer(input_text, return_tensors=\"pt\", device=device))\r\n```\r\n\r\nThis isn't pretty but it illustrates my point that it makes _usage_ very easy and also _easy to understand_. It removes a lot of booilerplate stuff that as a user you don't want to spend time on. On the other hand I definitely understand your point that this will lead to more complexity on the library's side. I'd be interested to hear other people's opinions about this.",
"> how much ease-of-use do you want your library to have while also taking into account the complexity of the library itself.\r\n\r\nThis is definitely true, I fully agree :) And what you propose makes sense as well. I would be curious to hear other opinions too ^^",
"As seen with @thomwolf, will merge this PR as soon as the tests show all green. I'm updating all the library's docstrings to showcase best practices in a second PR.",
"Thanks for the update! I was writing my own tokenizer for some special inputs and saw the implementation for the `longest_first` truncation. Is there any reason why tokens are truncated one by one? It seems more efficient to truncate the longer one to the same length as the shorter one, and then truncate the same number of tokens from both of them. In this way, we need only 3 array slices in total, saving a lot of loops. "
] | 1,590 | 1,593 | 1,592 | MEMBER | null | Fix #4015
Edit @thomwolf: I morphed this into a large refactoring of the tokenizer code and tests to make it more flexible and give it a better API. Here is a summary of the changes.
## Breaking change
There is no breaking change in the user-facing methods (`encode`, `encode_plus`, `batch_encode_plus`, `tokenize`, `convert_XXX`).
There is a breaking change in the internal method `prepare_for_model`, which is now a private method `_prepare_for_model` with a simplified signature.
## A new main user-facing method: `__call__` i.e. `model_input = tokenizer(text, **kwargs)`
The extended encoding methods `encode_plus` and `batch_encode_plus` had names that could be intimidating for first-time users.
A new main entry point is created as `tokenizer.__call__` which wraps both methods. You can feed `__call__` with single examples, a pair of sentence to encode together or batches of single/pair sentences.
The signature of `__call__` is also a better fit for the 🤗nlp library when it comes to batches of pairs of sequences, since the first and second elements of the sentence pairs are supplied as separate arguments (see below) instead of a zipped list of pairs like in `batch_encode_plus`.
While all the previously provided methods (`encode`, `encode_plus`, `batch_encode_plus`, `tokenize`, `convert_XXX`) are still supported without breaking changes, `__call__` is now the recommended way to encode all types of inputs when `tokenizer.encode` (which only return the list of input indices for a single sentence) is not enough i.e. for every case beside simple demo purposes.
Here is how you should use this new entry point for encoding text in all the main use-cases:
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
# 1. When you encode "a single sentence"
encoded_input = tokenizer("Hello I'm a single sentence")
# { 'input_ids': [101, 8667, 146, 112, 182, 170, 1423, 5650, 102],
# 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0],
# 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}
# 2. When you encode "a pair of sentences in a single input"
encoded_input = tokenizer("How old are you?", "I'm 6 years old")
# { 'input_ids': [101, 1731, 1385, 1132, 1128, 136, 102, 146, 112, 182, 127, 1201, 1385, 102],
# 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
# 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
# 3. When you encode "a batch of single sentences"
batch_sentences = ["Hello I'm a single sentence",
"And another sentence",
"And the very very last one"]
encoded_input = tokenizer(batch_sentences)
# { 'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102],
# [101, 1262, 1330, 5650, 102],
# [101, 1262, 1103, 1304, 1304, 1314, 1141, 102]],
# 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0]],
# 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1],
# [1, 1, 1, 1, 1],
# [1, 1, 1, 1, 1, 1, 1, 1]]}
# You can batch (to max sequence size) and truncate (to max model length)
# with `padding`and `truncation` (see more details in the next section on padding/truncation)
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
# { 'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102],
# [101, 1262, 1330, 5650, 102, 0, 0, 0, 0],
# [101, 1262, 1103, 1304, 1304, 1314, 1141, 102, 0]],
# 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 0]],
# 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1],
# [1, 1, 1, 1, 1, 0, 0, 0, 0],
# [1, 1, 1, 1, 1, 1, 1, 1, 0]]}
# 4. When you encode "a batch of pair of sentences"
batch_of_second_sentences = ["I'm a sentence that goes with the first sentence",
"And I should be encoded with the second sentence",
"And I go with the very last one"]
encoded_input = tokenizer(batch_sentences,
batch_of_second_sentences,
padding=True,
truncation=True)
# { 'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102, 146, 112, 182, 170, 5650, 1115, 2947, 1114, 1103, 1148, 5650, 102],
# [101, 1262, 1330, 5650, 102, 1262, 146, 1431, 1129, 12544, 1114, 1103, 1248, 5650, 102, 0, 0, 0, 0, 0, 0],
# [101, 1262, 1103, 1304, 1304, 1314, 1141, 102, 1262, 146, 1301, 1114, 1103, 1304, 1314, 1141, 102, 0, 0, 0, 0]],
# 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
# [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]],
# 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]]}
```
## Padding/truncation
The padding and truncation logic was simplified and improved to cover all the major uses-cases with the simplest possible API.
Here is how to do the two most common use-cases for truncation/padding:
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
batch_sentences = ["Hello I'm a single sentence",
"And another sentence",
"And the very very last one"]
# 1. No truncation and no padding
encoded_input = tokenizer(batch_sentences)
# 2. Pad to the max sequence length inside the provided batch
# while truncating to the max input length acceptable by the model
encoded_input = tokenizer(batch_sentences, truncation=True, padding=True)
```
The new API for padding and truncation uses three arguments to the encoding methods: `padding`, `truncation` and `max_length`. This new way to specify padding/truncation is available in all the user-facing encoding methods: `encode`, `encode_plus`, `batch_ecode_plus` and the newly provided `__call__`.
All the previously provided ways to do padding/truncation (`truncation_strategy`, `max_length`, `pad_to_max_length`) are still supported without breaking changes but we recommend to use the new API.
Here are the details of all the possible inputs to `padding`, `truncation` and `max_length`:
- `padding` to control the padding (can be provided with a boolean or a string for finer-grained control). `padding` accepts the following values:
  * `True` or `'longest'`: pad to the longest sequence in the batch (or no padding if only a single sequence is provided),
* `'max_length'`: pad to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`)
* `False` or `'do_not_pad'` (default): No padding (i.e. can output batch with sequences of uneven lengths)
- `truncation` to control truncation (can be provided with a boolean or a string for finer-grained control). `truncation` accepts the following values:
* `True` or `'only_first'`: truncate to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`). This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided,
* `'only_second'`: truncate to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`). This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided,
* `'longest_first'`: truncate to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`). This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided,
* `False` or `'do_not_truncate'` (default): No truncation (i.e. can output batch with sequences length greater than the model max admissible input size)
- `max_length` to control the length of the padding/truncation (integer or `None`). `max_length` accepts the following values:
* `None` (default): This will use the predefined model max length if required by one of the truncation/padding parameters. If the model has no specific max input length (e.g. XLNet) truncation/padding to max length is deactivated.
* `any integer value` (e.g. `42`): Use this specific maximum length value if required by one of the truncation/padding parameters.
Now here is a table summarizing the recommended way to setup `padding` and `truncation` as well as the previously provided way to do it (still supported but not recommended) in all cases.
If you use pairs of input sequences in any of the following examples, you can replace `truncation=True` by a `STRATEGY` selected from `['only_first', 'only_second', 'longest_first']`, i.e. `truncation='only_second'` or `truncation='longest_first'`, to control how both sequences in the pair are truncated as detailed just before the table. We don't include all these variants to keep the table from growing too long.
| Truncation | Padding | Recommended way | Previously provided (still supported but not recommended)
| --- | --- | --- | --- |
| no truncation | no padding | `tokenizer(batch_sentences)` | `tokenizer.batch_encode_plus(batch_sentences)`
| no truncation | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True)` or `tokenizer(batch_sentences, padding='longest')`| `tokenizer.batch_encode_plus(batch_sentences, pad_to_max_length=True)`
| no truncation | padding to max model input length | `tokenizer(batch_sentences, padding='max_length')` | Not possible
| no truncation | padding to specific length | `tokenizer(batch_sentences, padding='max_length', max_length=42)` | Not possible
| | | | |
| truncation to max model input length | no padding | `tokenizer(batch_sentences, truncation=True)` or `tokenizer(batch_sentences, truncation=STRATEGY)` | `tokenizer.batch_encode_plus(batch_sentences, max_length=tokenizer.max_len)`
| truncation to max model input length | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True)` or `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` | Not possible
| truncation to max model input length | padding to max model input length | `tokenizer(batch_sentences, padding='max_length', truncation=True)` or `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` | `tokenizer.batch_encode_plus(batch_sentences, pad_to_max_length=True, max_length=tokenizer.max_len)`
| truncation to max model input length | padding to specific length | Not possible | Not possible
| | | | |
| truncation to specific length | no padding | `tokenizer(batch_sentences, truncation=True, max_length=42)` or `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` | `tokenizer.batch_encode_plus(batch_sentences, max_length=42)`
| truncation to specific length | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` or `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` | Not possible
| truncation to specific length | padding to max model input length | Not possible | Not possible
| truncation to specific length | padding to specific length | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` or `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` | `tokenizer.batch_encode_plus(batch_sentences, pad_to_max_length=True, max_length=42)`
## Pre-tokenized inputs
The tokenizers now accept pre-tokenized inputs, i.e. inputs which are already split into words. The main reason for implementing a specific track for this type of input is to be able to use the fast mapping methods in `tokenizers`, which provide character<=>token<=>word mappings. This is very handy for easily computing labels and extracting predictions, for instance for Named-Entity Recognition (NER) or Part-of-Speech (POS) tagging.
If you want to use pre-tokenized inputs, just set `is_pretokenized=True` in any of the encoding methods. Here are some examples:
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
batch_sentences = [["Hello", "I'm", "a", "single", "sentence"],
["And", "another", "sentence"],
["And", "the", "very", "very", "last", "one"]]
encoded_input = tokenizer(batch_sentences, is_pretokenized=True)
# Pre-tokenized inputs can be used in all cases (single/pair/batch of single/batch of pairs)
batch_of_second_sentences = ["I'm a sentence that goes with the first sentence".split(),
"And I should be encoded with the second sentence".split(),
"And I go with the very last one".split()]
encoded_input = tokenizer(batch_sentences,
batch_of_second_sentences,
is_pretokenized=True,
padding=True,
truncation=True)
```
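And as a taste of the mappings this unlocks, a small sketch (assuming a fast tokenizer such as `BertTokenizerFast`; the `tokens()`/`token_to_word()` helpers used here are the fast-tokenizer mapping API on the returned `BatchEncoding`):

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
words = ["Hello", "I'm", "a", "single", "sentence"]
encoded = tokenizer(words, is_pretokenized=True)

# Map a token position back to the word it came from, which is handy
# when aligning NER/POS labels with subword tokens.
print(encoded.tokens())
print(encoded.token_to_word(2))
```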
## Verbose
A new `verbose` argument is provided in all the encoding methods to silence all the warnings related to the length of the input as well as missing special tokens (e.g. missing padding or unknown token).
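For example, a sketch reusing `tokenizer` and `batch_sentences` from the snippets above:

```python
# With verbose=False, no length / missing-special-token warnings are emitted
# for this call.
encoded_input = tokenizer(batch_sentences, truncation=True, verbose=False)
```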
## Code organization
`tokenization_utils.py` was starting to grow out of control and is now split into three files:
- `tokenization_utils.py` hosts the code for the `PreTrainedTokenizers`
- `tokenization_utils_fast.py` hosts the code for the `PreTrainedTokenizersFast`
- `tokenization_utils_base.py` hosts the common methods for `PreTrainedTokenizers` and `PreTrainedTokenizersFast` (mostly the front API) in a newly created `PretrainedTokenizerBase` as well as all the common logic for special tokens (in `SpecialMixin`) and for the outputs of the encoding (in `BatchEncoding`).
## Full testing of fast tokenizers
The fast tokenizers provided by the [tokenizers](https://github.com/huggingface/tokenizers) library are now fully tested and follow the same testing pipeline as the python (slow) tokenizers. Additional consistency tests have been added comparing the outputs of the fast and slow tokenizers under various conditions.
## TODO (following PRs)
- Serialization for Fast tokenizers
- Some edge cases for `add_tokens` on Fast tokenizers are not covered (spaces in tokens for byte-level and lower casing of the added tokens). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4510/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4510/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4510",
"html_url": "https://github.com/huggingface/transformers/pull/4510",
"diff_url": "https://github.com/huggingface/transformers/pull/4510.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4510.patch",
"merged_at": 1592255572000
} |
https://api.github.com/repos/huggingface/transformers/issues/4509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4509/comments | https://api.github.com/repos/huggingface/transformers/issues/4509/events | https://github.com/huggingface/transformers/pull/4509 | 622,822,055 | MDExOlB1bGxSZXF1ZXN0NDIxNjE0MTMy | 4,509 | Add packaging to setup.py | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=h1) Report\n> Merging [#4509](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/865d4d595eefc8cc9cee58fec9179bd182be0e2e&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4509 +/- ##\n==========================================\n- Coverage 77.90% 77.88% -0.02% \n==========================================\n Files 123 123 \n Lines 20472 20472 \n==========================================\n- Hits 15949 15945 -4 \n- Misses 4523 4527 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/hf\\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=footer). Last update [865d4d5...ffd7187](https://codecov.io/gh/huggingface/transformers/pull/4509?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This dependency was recently added, but it was not intended. It was removed with https://github.com/huggingface/transformers/commit/10d72390c029b3f139639621fb9a3a264560e05b. Thanks for offering a fix!"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | Running `pip install -e transformers` and then `python -c "import transformers"` fails on a fresh Docker container with the error:
```bash
ModuleNotFoundError: No module named 'packaging'
Thu May 21 21:59:44 2020<stderr>:Traceback (most recent call last):
Thu May 21 21:59:44 2020<stderr>: File "/.../", line 37, in <module>
Thu May 21 21:59:44 2020<stderr>: from transformers import (
Thu May 21 21:59:44 2020<stderr>: File "/fsx/transformers/src/transformers/__init__.py", line 350, in <module>
Thu May 21 21:59:44 2020<stderr>: from .trainer import Trainer, set_seed, torch_distributed_zero_first, EvalPrediction
Thu May 21 21:59:44 2020<stderr>: File "/fsx/transformers/src/transformers/trainer.py", line 14, in <module>
Thu May 21 21:59:44 2020<stderr>: from packaging import version
```
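Until the dependency is declared, a trivial interim workaround (nothing transformers-specific) is to install it by hand, sketched here from inside Python:

```python
# Equivalent to running `pip install packaging` in the shell.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "packaging"])
```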
Looks like this dependency was recently added, so this PR adds it to the setup.py requirements. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4509/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4509",
"html_url": "https://github.com/huggingface/transformers/pull/4509",
"diff_url": "https://github.com/huggingface/transformers/pull/4509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4509.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4508/comments | https://api.github.com/repos/huggingface/transformers/issues/4508/events | https://github.com/huggingface/transformers/issues/4508 | 622,791,416 | MDU6SXNzdWU2MjI3OTE0MTY= | 4,508 | FillMaskPipeline crashes when executed on TPU | {
"login": "LeonieWeissweiler",
"id": 30300891,
"node_id": "MDQ6VXNlcjMwMzAwODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/30300891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeonieWeissweiler",
"html_url": "https://github.com/LeonieWeissweiler",
"followers_url": "https://api.github.com/users/LeonieWeissweiler/followers",
"following_url": "https://api.github.com/users/LeonieWeissweiler/following{/other_user}",
"gists_url": "https://api.github.com/users/LeonieWeissweiler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeonieWeissweiler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeonieWeissweiler/subscriptions",
"organizations_url": "https://api.github.com/users/LeonieWeissweiler/orgs",
"repos_url": "https://api.github.com/users/LeonieWeissweiler/repos",
"events_url": "https://api.github.com/users/LeonieWeissweiler/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeonieWeissweiler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
}
] | closed | false | null | [] | [
"Hello! Pipelines are not tested on TPUs yet, unfortunately, and we have not made any effort to support them on that device. We may down the road, once TPU CI is more easily available."
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
I am following the tutorial in https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=QDNgPls7_l13 and running on Google Colab using the TPU. The Pipeline object creation works fine, but when I try to run it on the example sentence, the Colab runtime crashes immediately with an unclear cause and no error message. If I remove the TPU and do not install xla, the pipeline works fine.
## To reproduce
Steps to reproduce the behavior:
```python
!pip uninstall transformers
!git clone https://github.com/huggingface/transformers
!pip install ./transformers
VERSION = "nightly"
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="drive/My Drive/models/EsperBERTo/output/checkpoint-15000",
tokenizer="drive/My Drive/models/EsperBERTo"
)
fill_mask("La suno <mask>.")
```
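One thing that may help narrow it down is pinning the pipeline to CPU with the TPU runtime still attached; `device=-1` is the pipelines' CPU setting, so this sketch isolates whether the crash comes from the XLA side at all:

```python
from transformers import pipeline

fill_mask_cpu = pipeline(
    "fill-mask",
    model="drive/My Drive/models/EsperBERTo/output/checkpoint-15000",
    tokenizer="drive/My Drive/models/EsperBERTo",
    device=-1,  # force CPU even though torch_xla is installed
)
print(fill_mask_cpu("La suno <mask>."))
```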
Is anyone else experiencing this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4508/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4507/comments | https://api.github.com/repos/huggingface/transformers/issues/4507/events | https://github.com/huggingface/transformers/issues/4507 | 622,779,488 | MDU6SXNzdWU2MjI3Nzk0ODg= | 4,507 | Hard-coded force_download in run_squad forces expensive community download | {
"login": "mfeblowitz",
"id": 6854939,
"node_id": "MDQ6VXNlcjY4NTQ5Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6854939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfeblowitz",
"html_url": "https://github.com/mfeblowitz",
"followers_url": "https://api.github.com/users/mfeblowitz/followers",
"following_url": "https://api.github.com/users/mfeblowitz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfeblowitz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfeblowitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfeblowitz/subscriptions",
"organizations_url": "https://api.github.com/users/mfeblowitz/orgs",
"repos_url": "https://api.github.com/users/mfeblowitz/repos",
"events_url": "https://api.github.com/users/mfeblowitz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfeblowitz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052333,
"node_id": "MDU6TGFiZWwxODM0MDUyMzMz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Question%20Answering",
"name": "Ex: Question Answering",
"color": "86FFCF",
"default": false,
"description": ""
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Hi, thanks for the well-formulated question! Are you using the latest examples? When I look at the current branch, there is no force_downloading (any more) - it has been commented out:\r\n\r\nhttps://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/examples/question-answering/run_squad.py#L790\r\n\r\nhttps://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/examples/question-answering/run_squad.py#L815",
"Yes - I saw that. I have been using v2.2.1, since upgrading has broken at least one of the tasks I'm performing. Until I can debug, I guess I can limp along by sed-replacing the True in the two instances of force_download.",
"Alright. In that case, I'm closing this since it's already \"fixed\" in the recent versions. "
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Using a community-registered model (albert, squad2) I noticed that there's no real caching going on during predict/evaluate. In an application that invokes run_squad dozens to hundreds of times, this adds significantly to processing time.
This is due to at least one of the two force_download hard-codings in the run_squad.py script. It would be best to promote the force_download option into the run_squad arguments and let the user override.
I have tested by manually modifying the force_download to be False and caching does occur (I haven't tested dirty cache refetch).
Model I am using (Bert, XLNet ...): Community-submitted Albert v2 xxlarge fine-tuned for SQuAD2 on Torch, https://huggingface.co/mfeb/albert-xxlarge-v2-squad2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task:
* [ ] my own task or dataset: (give details below)
## To reproduce
Use run_squad.py more than once on a community-installed model and see the fetch go to a different /tmp copy for each invocation.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Expect force_download to be overridable at run_squad invocation, especially for community-registered models.
<!-- A clear and concise description of what you would expect to happen. -->
Update run_squad.py to add a `force_download` argument (default of your choosing) and use its value in the two places where `force_download` is hard-coded.
Better might be for the default value to be determined from the nature of the model location/type (e.g., no forced download for non-local models).
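A minimal sketch of what that could look like (the flag name and wiring are hypothetical; run_squad.py does not expose this option in v2.2.1):

```python
# Hypothetical --force_download flag for run_squad.py (illustrative names only).
parser.add_argument(
    "--force_download",
    action="store_true",
    help="Force re-downloading the model even if a cached copy exists.",
)

# ...then thread args.force_download into the two call sites that hard-code it:
model = model_class.from_pretrained(
    args.model_name_or_path,
    config=config,
    cache_dir=args.cache_dir if args.cache_dir else None,
    force_download=args.force_download,  # previously force_download=True
)
```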
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: v2.2.1
- Platform: ubuntu
- Python version: 3.7.7
- PyTorch version (GPU?): gpu 1.3.1
- Tensorflow version (GPU?): gpu 2.0.0
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4507/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4506/comments | https://api.github.com/repos/huggingface/transformers/issues/4506/events | https://github.com/huggingface/transformers/pull/4506 | 622,770,236 | MDExOlB1bGxSZXF1ZXN0NDIxNTcxNDE3 | 4,506 | [Summarization Pipeline]: Fix default tokenizer | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=h1) Report\n> Merging [#4506](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4506 +/- ##\n==========================================\n- Coverage 77.83% 77.82% -0.01% \n==========================================\n Files 123 123 \n Lines 20514 20514 \n==========================================\n- Hits 15968 15966 -2 \n- Misses 4546 4548 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4506/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.11% <ø> (ø)` | |\n| [src/transformers/hf\\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4506/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4506/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4506/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=footer). Last update [a086527...70d3058](https://codecov.io/gh/huggingface/transformers/pull/4506?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | `pipeline.tokenizer` cannot be a dict! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4506/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4506",
"html_url": "https://github.com/huggingface/transformers/pull/4506",
"diff_url": "https://github.com/huggingface/transformers/pull/4506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4506.patch",
"merged_at": 1590184186000
} |
https://api.github.com/repos/huggingface/transformers/issues/4505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4505/comments | https://api.github.com/repos/huggingface/transformers/issues/4505/events | https://github.com/huggingface/transformers/pull/4505 | 622,694,196 | MDExOlB1bGxSZXF1ZXN0NDIxNTA5MDk0 | 4,505 | add 2 colab notebooks | {
"login": "lavanyashukla",
"id": 12243123,
"node_id": "MDQ6VXNlcjEyMjQzMTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/12243123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lavanyashukla",
"html_url": "https://github.com/lavanyashukla",
"followers_url": "https://api.github.com/users/lavanyashukla/followers",
"following_url": "https://api.github.com/users/lavanyashukla/following{/other_user}",
"gists_url": "https://api.github.com/users/lavanyashukla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lavanyashukla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lavanyashukla/subscriptions",
"organizations_url": "https://api.github.com/users/lavanyashukla/orgs",
"repos_url": "https://api.github.com/users/lavanyashukla/repos",
"events_url": "https://api.github.com/users/lavanyashukla/events{/privacy}",
"received_events_url": "https://api.github.com/users/lavanyashukla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=h1) Report\n> Merging [#4505](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4505 +/- ##\n=======================================\n Coverage 77.83% 77.84% \n=======================================\n Files 123 123 \n Lines 20514 20514 \n=======================================\n+ Hits 15968 15969 +1 \n+ Misses 4546 4545 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=footer). Last update [a086527...2293fe1](https://codecov.io/gh/huggingface/transformers/pull/4505?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Those are amazing notebooks! Would it maybe be possible to connect the Notebook \"A Step by Step Guide to Tracking Hugging Face Model Performance\" to a github account and link it from there? As it's done for other notebook ?",
"Merging for now - link can be updated at a later stage."
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4505/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4505",
"html_url": "https://github.com/huggingface/transformers/pull/4505",
"diff_url": "https://github.com/huggingface/transformers/pull/4505.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4505.patch",
"merged_at": 1590657497000
} |
https://api.github.com/repos/huggingface/transformers/issues/4504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4504/comments | https://api.github.com/repos/huggingface/transformers/issues/4504/events | https://github.com/huggingface/transformers/issues/4504 | 622,691,370 | MDU6SXNzdWU2MjI2OTEzNzA= | 4,504 | SummarizationPipeline crashes | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1841528858,
"node_id": "MDU6TGFiZWwxODQxNTI4ODU4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Summarization",
"name": "Summarization",
"color": "b6f97f",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Is this issue fixed in version 2.10.0?",
"@julien-c I still get the same error when doing\r\n\r\n```\r\nsummarizer = pipeline('summarization')\r\n```\r\n\r\nand using it to summarize.\r\n\r\nHowever the following explicitely works for me:\r\n\r\n```\r\nsummarizer = pipeline('summarization', model='bart-large-cnn', tokenizer='bart-large-cnn')\r\n```",
"Yeah that sounds like this issue. It will be fixed in the next release or you can build from source with\r\n```bash\r\ngit clone [this repo]\r\npip install -e .\r\n```",
"> Yeah that sounds like this issue. It will be fixed in the next release or you can build from source with\r\n> \r\n> ```shell\r\n> git clone [this repo]\r\n> pip install -e .\r\n> ```\r\n\r\nI have installed the package from GitHub repo but still have the same issue right now.",
"@khalilRhouma: It works for me at commit d976ef262e0b2c52363d201b2e14e5ecc42abbb3 , so you may need to `git pull` or some such. If that doesn't work I would love to see the output of \r\n```bash\r\ntransformers-cli env\r\n```\r\n\r\n\r\n",
"@sshleifer I get this error when I clone with that commit ID.\r\nKeyError: \"Unknown task summarization, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask']\"\r\n@dipanjanS Would be great to know what configuration you used",
"current master should also work.",
"@sshleifer The kernel still crashes\r\nAttaching the code.\r\n```\r\nfrom transformers import pipeline\r\nimport torch\r\n\r\n!git clone https://github.com/huggingface/transformers.git\r\n%cd transformers\r\n`!pip` install -e \".[dev]\"\r\n\r\n#summarizer = pipeline(\"summarization\")\r\nsummarizer = pipeline('summarization', model='facebook/bart-large-cnn', tokenizer='facebook/bart-large-cnn') ##Kernel dies after running this line\r\n```\r\n\r\ntransformers version - 2.11.0\r\ntorch - 1.5.0",
"Can't replicate :(.\r\nCan I see your `transformers-cli env` output?",
"How do I get that output? I'm running these on Jupyter without any virtual env",
"Got it.\r\n- `transformers` version: 2.11.0\r\n- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.9\r\n- Python version: 3.6.10\r\n- PyTorch version (GPU?): 1.5.0 (False)\r\n- Tensorflow version (GPU?): 2.2.0 (False)\r\n- Using GPU in script?: <No>\r\n- Using distributed or parallel set-up in script?: <No>",
"@sshleifer It works finally. There was a problem with GPU allocation. Thanks for your response."
] | 1,590 | 1,592 | 1,590 | MEMBER | null | ```python
summarize = pipeline("summarization")
summarize("Sam Shleifer writes the best docstring examples in the whole world.")
```
➡️
```
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in _parse_and_tokenize(self, pad_to_max_length, *args, **kwargs)
461 # Parse arguments
462 inputs = self._args_parser(*args, **kwargs)
--> 463 inputs = self.tokenizer.batch_encode_plus(
464 inputs, add_special_tokens=True, return_tensors=self.framework, pad_to_max_length=pad_to_max_length,
465 )
AttributeError: 'dict' object has no attribute 'batch_encode_plus'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4504/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4503/comments | https://api.github.com/repos/huggingface/transformers/issues/4503/events | https://github.com/huggingface/transformers/pull/4503 | 622,648,390 | MDExOlB1bGxSZXF1ZXN0NDIxNDcyNTg2 | 4,503 | Fix convert_token_type_ids_from_sequences for fast tokenizers | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=h1) Report\n> Merging [#4503](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4503 +/- ##\n==========================================\n+ Coverage 77.83% 77.86% +0.02% \n==========================================\n Files 123 123 \n Lines 20514 20526 +12 \n==========================================\n+ Hits 15968 15982 +14 \n+ Misses 4546 4544 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4503/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.00% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4503/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <100.00%> (+0.49%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4503/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4503/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=footer). Last update [a086527...795f44a](https://codecov.io/gh/huggingface/transformers/pull/4503?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | Before this fix, the generic version of `convert_token_type_ids_from_sequences` from `tokenizer_utils` was invoked when calling it on a `PreTrainedTokenizerFast`, so the `type_ids` for the special tokens were not included.
There is no way at the moment to get this information from the rust tokenizers, so we just use the implementation from the original python tokenizers. Tests added as well.
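A quick way to exercise this (a hedged sketch: it assumes `bert-base-uncased` can be fetched, and uses `create_token_type_ids_from_sequences`, the method's full name on the tokenizer classes):

```python
from transformers import BertTokenizer, BertTokenizerFast

slow = BertTokenizer.from_pretrained("bert-base-uncased")
fast = BertTokenizerFast.from_pretrained("bert-base-uncased")

ids_a = slow.encode("first sequence", add_special_tokens=False)
ids_b = slow.encode("second one", add_special_tokens=False)

# With the fix, the fast tokenizer accounts for [CLS]/[SEP] exactly like the slow one.
assert slow.create_token_type_ids_from_sequences(ids_a, ids_b) == \
    fast.create_token_type_ids_from_sequences(ids_a, ids_b)
```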
Thanks @dirkgr for reporting this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4503/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4503/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4503",
"html_url": "https://github.com/huggingface/transformers/pull/4503",
"diff_url": "https://github.com/huggingface/transformers/pull/4503.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4503.patch",
"merged_at": 1590165910000
} |
https://api.github.com/repos/huggingface/transformers/issues/4502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4502/comments | https://api.github.com/repos/huggingface/transformers/issues/4502/events | https://github.com/huggingface/transformers/issues/4502 | 622,596,101 | MDU6SXNzdWU2MjI1OTYxMDE= | 4,502 | How to finetune ELECTRA on glue? | {
"login": "elyesmanai",
"id": 21007166,
"node_id": "MDQ6VXNlcjIxMDA3MTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/21007166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elyesmanai",
"html_url": "https://github.com/elyesmanai",
"followers_url": "https://api.github.com/users/elyesmanai/followers",
"following_url": "https://api.github.com/users/elyesmanai/following{/other_user}",
"gists_url": "https://api.github.com/users/elyesmanai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elyesmanai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elyesmanai/subscriptions",
"organizations_url": "https://api.github.com/users/elyesmanai/orgs",
"repos_url": "https://api.github.com/users/elyesmanai/repos",
"events_url": "https://api.github.com/users/elyesmanai/events{/privacy}",
"received_events_url": "https://api.github.com/users/elyesmanai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052574,
"node_id": "MDU6TGFiZWwxODM0MDUyNTc0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification",
"name": "Ex: Sequence Classification",
"color": "46FFCF",
"default": false,
"description": ""
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"I have a pull request here\r\nhttps://github.com/huggingface/transformers/pull/4257",
"I just cloned your repo an tried to test with my model and it keeps saying the same: \r\n\r\n\r\nCould you tell me how it's used?",
"@liuzzi's PR was merged this morning. The `ElectraForSequenceClassification` model is now available, so you can use it directly in `run_glue.py`.\r\n\r\nPlease make sure to pull the latest changes from the repo, or to wait for `v2.10` which should be released in a few hours.",
"awesome, it works perfectly, thank you very much!",
"@elyesmanai Could you please share the code for pretraining Electra from scratch?",
"I'm using the simpletransformers library for the pretraining since transformers doesn't support it yet.\r\nhere's a [link](https://towardsdatascience.com/understanding-electra-and-training-an-electra-language-model-3d33e3a9660d) to how you can do it, it's super easy.\r\nIt's built on top of transformers so you can load the model into transformers and use the rest of the lib",
"The pre-training from scratch for `transformers` is available [here](https://github.com/huggingface/transformers/pull/4656). It is being tested right now.",
"This problem happened again, when I use ELECTRA on question-answering pipeline. My Transformers version is 2.11.0.\r\n\r\n> from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"ahotrod/electra_large_discriminator_squad2_512\")\r\n> \r\n> model = AutoModelForQuestionAnswering.from_pretrained(\"ahotrod/electra_large_discriminator_squad2_512\")\r\n> \r\n> albert_qa = pipeline('question-answering', model=model, tokenizer=tokenizer)\r\n\r\n\r\n"
] | 1,590 | 1,594 | 1,590 | CONTRIBUTOR | null | After pretraining my own ELECTRA model, I wanted to test it out on GLUE using run_glue.py.
However I got this:
```
ValueError: Unrecognized configuration class <class 'transformers.configuration_electra.ElectraConfig'> for this kind of AutoModel: AutoModelForSequenceClassification.
Model type should be one of DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, BartConfig, RobertaConfig, BertConfig, XLNetConfig, FlaubertConfig, XLMConfig.
```
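Per the comments above, `ElectraForSequenceClassification` landed in v2.10; once it is available, a minimal usage sketch (model name illustrative) looks like:

```python
# Sketch assuming transformers >= 2.10, where ElectraForSequenceClassification exists.
import torch
from transformers import ElectraForSequenceClassification, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForSequenceClassification.from_pretrained("google/electra-small-discriminator")

inputs = tokenizer.encode_plus("a GLUE-style sentence", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]  # (batch_size, num_labels)
```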
After taking a look at the source code, it seems that ElectraConfig isn't available for sequence classification. Is there a reason for that? Did anyone fine-tune ELECTRA on GLUE? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4502/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4501/comments | https://api.github.com/repos/huggingface/transformers/issues/4501/events | https://github.com/huggingface/transformers/issues/4501 | 622,556,769 | MDU6SXNzdWU2MjI1NTY3Njk= | 4,501 | Pipelines do not control input sequences longer than those accepted by the model | {
"login": "albarji",
"id": 9654655,
"node_id": "MDQ6VXNlcjk2NTQ2NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9654655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albarji",
"html_url": "https://github.com/albarji",
"followers_url": "https://api.github.com/users/albarji/followers",
"following_url": "https://api.github.com/users/albarji/following{/other_user}",
"gists_url": "https://api.github.com/users/albarji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albarji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albarji/subscriptions",
"organizations_url": "https://api.github.com/users/albarji/orgs",
"repos_url": "https://api.github.com/users/albarji/repos",
"events_url": "https://api.github.com/users/albarji/events{/privacy}",
"received_events_url": "https://api.github.com/users/albarji/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"Thanks for the well-structured question! It helps a lot in helping you.\r\n\r\n`pipeline` actually already accepts what you request: you can pass in a tuple for the tokenizer so that the first item is the tokenizer name and the second part is its kwargs.\r\n\r\nhttps://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/src/transformers/pipelines.py#L1784-L1790\r\n\r\nYou should be able to do something like this (not tested):\r\n\r\n```python\r\npipe = pipeline(\"sentiment-analysis\", tokenizer=('distilbert-base-uncased', {'model_max_length': 128}), model='distilbert-base-uncased')\r\n```\r\n\r\nThough it is still odd that you got an error. By default the max model length should be used... cc @LysandreJik @thomwolf \r\n\r\n",
"I think the problem is the following. Here: https://github.com/huggingface/transformers/blob/e19b978151419fe0756ba852b145fccfc96dbeb4/src/transformers/pipelines.py#L463\r\nThe input is encoded and has a length of 701 which is larger then `self.tokenizer.model_max_length` so that the forward pass of the model crashes.\r\n\r\nA simple fix would be to add a statement like:\r\n```python\r\nif inputs['input_ids'].shape[-1] > self.tokenizer.model_max_length: \r\n logger.warn(\"Input is cut....\")\r\n inputs['input_ids'] = input['input_ids'][:, :self.tokenizer.model_max_length]\r\n```, but I am not sure whether this is the best solution.\r\n\r\nI think the best solution would actually be to return a clean error message here and suggest to the user to use the option `max_length=512` for the tokenizer. The problem currently is though that when calling:\r\n\r\n```python \r\npipe(very_long_text)\r\n```\r\nno arguments for the `batch_encode_plus` function can be inserted because of two reasons:\r\n1. Current the `TextClassificationPipeline` cannot accept a mixture of `kwargs` and `args`, see https://github.com/huggingface/transformers/blob/e19b978151419fe0756ba852b145fccfc96dbeb4/src/transformers/pipelines.py#L141\r\n2. The `batch_encode_plus` function actually does not accept any **kwargs arguments currently, see https://github.com/huggingface/transformers/blob/e19b978151419fe0756ba852b145fccfc96dbeb4/src/transformers/pipelines.py#L464\r\n\r\nIMO, it would be a good idea to do a larger refactoring here where we allow the pipelines to be more flexible so that `batch_encode_plus` **kwargs can easily be inserted. @LysandreJik ",
"I too get the `RuntimeError: index out of range` error when using either the summarization or question-answering pipelines with text greater than their models' max_length. Presumably any pipeline, but I haven't tested. I've tried this without using any special models; that is, using the default model/tokenizer provided by the pipelines: `pipeline(\"summarization\")(text)`. This is after an upgrade from 2.8.0 (working) to 2.11.0. Windows 10.\r\n\r\nLMK if want further code/environment details. Figured I might just be pitching something you already know, but in case it adds any surprise-factor I'll be happy to add more details / run some more tests.",
"I've also tried the tokenizer tuple approach, but same out-of-range error:\r\n```python\r\npipeline(\"summarization\", tokenizer=('facebook/bart-large-cnn', {'model_max_length': 512}), model='facebook/bart-large-cnn')(text)\r\n# also tried:\r\n# pipeline(\"summarization\", tokenizer=('facebook/bart-large-cnn', {'max_length': 512}), model='facebook/bart-large-cnn')(text)\r\n```\r\n",
"Currently, it is not possible to use pipelines with inputs longer than the ones allowed by the model. We should soon provide automatic cutting to max length in case the input is longer than allowed.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@patrickvonplaten Hey Patrick, is there any progress on what you suggest i.e. automatically cutting to max length when the input is longer than that allowed by the model, when using pipeline.",
"You should now be able to pass `truncation=True` to the pipeline call for it to truncate sequences that are too long.",
"> You should now be able to pass `truncation=True` to the pipeline call for it to truncate sequences that are too long.\r\n\r\nHow does this work exactly? I tried passing truncation=True to the pipeline call but it did not work.",
"It is not working for me either. Code to reproduce error is below. \r\n\r\n```\r\ntext = [\"The Wallabies are going to win the RWC in 2023.\"]\r\n ner = pipeline(\r\n task=\"ner\", \r\n model=AutoModelForTokenClassification.from_pretrained(ner_model),\r\n tokenizer=AutoTokenizer.from_pretrained(ner_model),\r\n aggregation_strategy=\"average\"\r\n )\r\nner(text, trucation=True)\r\n```\r\n\r\nError message is:\r\n\r\n`_sanitize_parameters() got an unexpected keyword argument 'truncation'`\r\n\r\n",
"Hi All,\r\n\r\nAny update on this, I am still facing this issue. I tried passing the parameters(max_length=512, truncation=True) into the pipeline. But still getting the error(IndexError: index out of range in self). I have tried text classification for a sentence of length 900 and got this error.\r\n\r\nAny help will be highly appreciated. ",
"Hi,\r\n\r\nAny news about this issue? I have the same problem as the person before.",
"@Pushkinue do you have your example handy ?\r\n\r\nThe thing will depend on which pipeline you're using and the actual script."
] | 1,590 | 1,679 | 1,598 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DistilBERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
1. Create a "sentiment-analysis" pipeline with a DistilBERT tokenizer and model
2. Prepare a string that will produce more than 512 tokens upon tokenization
3. Run the pipeline over such input string
```python
from transformers import pipeline
pipe = pipeline("sentiment-analysis", tokenizer='distilbert-base-uncased', model='distilbert-base-uncased')
very_long_text = "This is a very long text" * 100
pipe(very_long_text)
```
## Expected behavior
The pipeline should ensure in some way that the input string does not overflow the maximum number of tokens the model can accept, for instance by truncating during the tokenization step. The user can't easily control this beforehand, as the tokenizer is run by the pipeline itself, and it is hard to predict how many tokens a given text will be broken into.
One possible way of addressing this might be to include optional parameters in the pipeline constructor that are forwarded to the tokenizer.
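In the meantime, a user-side workaround is to pre-truncate before handing text to the pipeline. A sketch, assuming `model_max_length` is set on the tokenizer (512 for DistilBERT) and that passing `max_length` to `encode` truncates on these versions:

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
pipe = pipeline("sentiment-analysis", tokenizer="distilbert-base-uncased", model="distilbert-base-uncased")

very_long_text = "This is a very long text" * 100
ids = tokenizer.encode(very_long_text, max_length=tokenizer.model_max_length)  # truncates
pipe(tokenizer.decode(ids, skip_special_tokens=True))
```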
The current error trace is:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-ef48faf7ffbb> in <module>
3 pipe = pipeline("sentiment-analysis", tokenizer='distilbert-base-uncased', model='distilbert-base-uncased')
4 very_long_text = "This is a very long text" * 100
----> 5 pipe(very_long_text)
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
714
715 def __call__(self, *args, **kwargs):
--> 716 outputs = super().__call__(*args, **kwargs)
717 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True)
718 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores]
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
469 def __call__(self, *args, **kwargs):
470 inputs = self._parse_and_tokenize(*args, **kwargs)
--> 471 return self._forward(inputs)
472
473 def _forward(self, inputs, return_tensors=False):
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/pipelines.py in _forward(self, inputs, return_tensors)
488 with torch.no_grad():
489 inputs = self.ensure_tensor_on_device(**inputs)
--> 490 predictions = self.model(**inputs)[0].cpu()
491
492 if return_tensors:
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, labels)
609 """
610 distilbert_output = self.distilbert(
--> 611 input_ids=input_ids, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds
612 )
613 hidden_state = distilbert_output[0] # (bs, seq_len, dim)
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds)
464
465 if inputs_embeds is None:
--> 466 inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim)
467 tfmr_output = self.transformer(x=inputs_embeds, attn_mask=attention_mask, head_mask=head_mask)
468 hidden_state = tfmr_output[0]
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/transformers/modeling_distilbert.py in forward(self, input_ids)
89
90 word_embeddings = self.word_embeddings(input_ids) # (bs, max_seq_length, dim)
---> 91 position_embeddings = self.position_embeddings(position_ids) # (bs, max_seq_length, dim)
92
93 embeddings = word_embeddings + position_embeddings # (bs, max_seq_length, dim)
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/anaconda3/envs/deeplearning-labs-gpu/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. at /tmp/pip-req-build-808afw3c/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
## Environment info
```
# Name Version Build Channel
_libgcc_mutex 0.1 main
_pytorch_select 0.2 gpu_0
_tflow_select 2.1.0 gpu
absl-py 0.9.0 py36_0
asn1crypto 1.3.0 py36_0
astor 0.8.0 py36_0
attrs 19.3.0 py_0
backcall 0.1.0 py36_0
blas 1.0 mkl
bleach 3.1.4 py_0
boto3 1.12.47 pypi_0 pypi
botocore 1.15.47 pypi_0 pypi
c-ares 1.15.0 h7b6447c_1001
ca-certificates 2020.1.1 0
certifi 2020.4.5.1 py36_0
cffi 1.14.0 py36h2e261b9_0
chardet 3.0.4 py36_1003
click 7.1.2 pypi_0 pypi
cloudpickle 1.3.0 py_0
cryptography 2.8 py36h1ba5d50_0
cudatoolkit 10.1.243 h6bb024c_0
cudnn 7.6.5 cuda10.1_0
cupti 10.1.168 0
cycler 0.10.0 py36_0
cytoolz 0.10.1 py36h7b6447c_0
dask-core 2.15.0 py_0
dataclasses 0.7 pypi_0 pypi
dbus 1.13.12 h746ee38_0
decorator 4.4.2 py_0
defusedxml 0.6.0 py_0
docutils 0.15.2 pypi_0 pypi
eli5 0.10.1 pypi_0 pypi
entrypoints 0.3 py36_0
expat 2.2.6 he6710b0_0
filelock 3.0.12 pypi_0 pypi
fontconfig 2.13.0 h9420a91_0
freetype 2.9.1 h8a8886c_1
gast 0.3.3 py_0
glib 2.63.1 h5a9c865_0
gmp 6.1.2 h6c8ec71_1
google-pasta 0.2.0 py_0
grpcio 1.27.2 py36hf8bcb03_0
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb453b48_1
h5py 2.10.0 py36h7918eee_0
hdf5 1.10.4 hb1b8bf9_0
icu 58.2 h9c2bf20_1
idna 2.8 py36_0
imageio 2.8.0 py_0
importlib_metadata 1.5.0 py36_0
intel-openmp 2020.0 166
ipykernel 5.1.4 py36h39e3cac_0
ipython 7.13.0 py36h5ca1d4c_0
ipython_genutils 0.2.0 py36_0
ipywidgets 7.5.1 py_0
jedi 0.16.0 py36_1
jinja2 2.11.1 py_0
jmespath 0.9.5 pypi_0 pypi
joblib 0.14.1 py_0
jpeg 9b h024ee3a_2
json5 0.9.4 pypi_0 pypi
jsonschema 3.2.0 py36_0
jupyter 1.0.0 py36_7
jupyter_client 6.1.2 py_0
jupyter_console 6.1.0 py_0
jupyter_core 4.6.3 py36_0
jupyterlab 2.1.2 pypi_0 pypi
jupyterlab-server 1.1.4 pypi_0 pypi
keras-applications 1.0.8 py_0
keras-base 2.3.1 py36_0
keras-gpu 2.3.1 0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py36he6710b0_0
ld_impl_linux-64 2.33.1 h53a641e_7
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 hd88cf55_4
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
libpng 1.6.37 hbc83047_0
libprotobuf 3.11.4 hd408876_0
libsodium 1.0.16 h1bed415_0
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.1.0 h2733197_0
libuuid 1.0.3 h1bed415_2
libxcb 1.13 h1bed415_1
libxml2 2.9.9 hea5a465_1
markdown 3.1.1 py36_0
markupsafe 1.1.1 py36h7b6447c_0
matplotlib 2.2.2 py36hb69df0a_2
mistune 0.8.4 py36h7b6447c_0
mkl 2020.0 166
mkl-service 2.3.0 py36he904b0f_0
mkl_fft 1.0.15 py36ha843d7b_0
mkl_random 1.1.0 py36hd6b4f25_0
nb_conda 2.2.1 py36_0
nb_conda_kernels 2.2.3 py36_0
nbconvert 5.6.1 py36_0
nbformat 5.0.4 py_0
ncurses 6.2 he6710b0_0
networkx 2.4 py_0
ninja 1.9.0 py36hfd86e86_0
notebook 6.0.3 py36_0
numpy 1.18.1 py36h4f9e942_0
numpy-base 1.18.1 py36hde5b4d6_1
olefile 0.46 py36_0
openssl 1.1.1g h7b6447c_0
packaging 20.3 py_0
pandas 0.23.0 py36h637b7d7_0
pandoc 2.2.3.2 0
pandocfilters 1.4.2 py36_1
parso 0.6.2 py_0
pcre 8.43 he6710b0_0
pexpect 4.8.0 py36_0
pickleshare 0.7.5 py36_0
pillow 7.0.0 py36hb39fc2d_0
pip 19.3.1 py36_0
prometheus_client 0.7.1 py_0
prompt-toolkit 3.0.4 py_0
prompt_toolkit 3.0.4 0
protobuf 3.11.4 py36he6710b0_0
ptyprocess 0.6.0 py36_0
pycparser 2.20 py_0
pygments 2.6.1 py_0
pyopenssl 19.1.0 py36_0
pyparsing 2.4.6 py_0
pyqt 5.9.2 py36h05f1152_2
pyrsistent 0.16.0 py36h7b6447c_0
pysocks 1.7.1 py36_0
python 3.6.10 hcf32534_1
python-dateutil 2.8.1 py_0
python-graphviz 0.14 pypi_0 pypi
pytorch 1.4.0 cuda101py36h02f0884_0
pytz 2019.3 py_0
pywavelets 1.1.1 py36h7b6447c_0
pyyaml 5.3.1 py36h7b6447c_0
pyzmq 18.1.1 py36he6710b0_0
qt 5.9.7 h5867ecd_1
qtconsole 4.7.3 py_0
qtpy 1.9.0 py_0
readline 8.0 h7b6447c_0
regex 2020.4.4 pypi_0 pypi
requests 2.22.0 py36_1
s3transfer 0.3.3 pypi_0 pypi
sacremoses 0.0.41 pypi_0 pypi
scikit-image 0.14.2 py36he6710b0_0
scikit-learn 0.22.1 py36hd81dba3_0
scikit-optimize 0.5.2 pypi_0 pypi
scipy 1.4.1 py36h0b6359f_0
send2trash 1.5.0 py36_0
sentencepiece 0.1.86 pypi_0 pypi
setuptools 46.1.3 py36_0
sip 4.19.8 py36hf484d3e_0
six 1.14.0 py36_0
sqlite 3.31.1 h62c20be_1
tabulate 0.8.7 pypi_0 pypi
tensorboard 1.14.0 py36hf484d3e_0
tensorflow 1.14.0 gpu_py36h3fb9ad6_0
tensorflow-base 1.14.0 gpu_py36he45bfe2_0
tensorflow-estimator 1.14.0 py_0
tensorflow-gpu 1.14.0 h0d30ee6_0
termcolor 1.1.0 py36_1
terminado 0.8.3 py36_0
testpath 0.4.4 py_0
tk 8.6.8 hbc83047_0
tokenizers 0.7.0 pypi_0 pypi
toolz 0.10.0 py_0
torchvision 0.5.0 py36_cu101 pytorch
tornado 6.0.4 py36h7b6447c_1
tqdm 4.45.0 pypi_0 pypi
traitlets 4.3.3 py36_0
transformers 2.9.1 pypi_0 pypi
urllib3 1.25.8 py36_0
wcwidth 0.1.9 py_0
webencodings 0.5.1 py36_1
werkzeug 1.0.1 py_0
wheel 0.34.2 py36_0
widgetsnbextension 3.5.1 py36_0
wrapt 1.12.1 py36h7b6447c_1
xz 5.2.5 h7b6447c_0
yaml 0.1.7 had09818_2
zeromq 4.3.1 he6710b0_3
zipp 2.2.0 py_0
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0
```
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- Platform: Linux matrix 4.4.0-174-generic #204-Ubuntu SMP Wed Jan 29 06:41:01 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Python version: Python 3.6.10 :: Anaconda, Inc.
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4501/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/4501/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4500/comments | https://api.github.com/repos/huggingface/transformers/issues/4500/events | https://github.com/huggingface/transformers/pull/4500 | 622,541,164 | MDExOlB1bGxSZXF1ZXN0NDIxMzg2NDM1 | 4,500 | Longformer for question answering | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you add it to the automodel class too ?",
"@ibeltagy - could you also take a look here",
"Thank you @patil-suraj, this looks good. One thing I would suggest is to automatically configure the `attention_mask` with global attention on all the question tokens so that the user doesn't need to worry about it. Global attention is not important for a dataset with short documents like squad but crucial for tasks where the document is long. \r\nYou can check [here](https://github.com/allenai/longformer/blob/master/scripts/triviaqa.py#L280) how we set the global attention mask for TriviaQA. It would be good to have something similar in the forward function of `LongformerForQuestionAnswering`; \r\n\r\n```\r\nif attention_mask is None:\r\n attention_mask = some_function(input_ids) # All ones. Twos for question tokens. Zero for padding tokens\r\nelse:\r\n pass # do nothing\r\n```\r\n\r\nYou will need to assume that you know where the question is in the input, usually at the beginning of the sequence, and usually separated from the rest with a certain metatag. Maybe we need extra input from the user to specify the separator tag. \r\n\r\nNotes about the code [here](https://github.com/allenai/longformer/blob/master/scripts/triviaqa.py#L280):\r\n- you don't need the padding step, it is already implemented in `LongformerModel`\r\n- this code assumes that the question length is the same for all examples, but we can't make that assumption here",
"@ibeltagy \r\nI did try creating `attention_mask ` automatically in forward method, but as you said this involves knowing where the question is (before or after the context) and the ids of `bos` and `sep` tokens. So model will need access to `tokenizer` to get ids or they'll need to be hardcoded. If we do this then I'm not sure how it will fit with the rest of the pipeline.\r\n\r\nSo can we provide this as a utility ? Or can we do this in the tokenizer where the user can provide indices in the original string for which global attention should be applied ?\r\n\r\n@patrickvonplaten Could you provide some feedback here ?",
"good points, @patil-suraj. We already have access to `self.config.pad_token_id`, so maybe we can do the same to get access to `bos_token_id` and `sep_token_id`?",
"Just checked, `self.config.bos_token_id` and `self.config.eos_token_id` are available but not `self.config.sep_token_id`. How about adding a new argument to the forward function that specifies the separator token? This is more general because there are cases where the user wants to use a different separator token from `sep_token_id`. ",
"@ibeltagy \r\nYes,` self.config.bos_token_id` and `self.config.eos_token_id` are available. If I'm not wrong the `eos` and `sep` tokens are same for `LongformerTokenizer`.\r\nSo we can do it two ways, either make it available in `self.config` or pass explicitly to the forward function. \r\nIf we make it available in `self.config` then the existing `QuestionAnsweringPipeline` won't need to be modified, and the user can override the `self.config.sep_token_id` if its different from `sep_token_id`",
"👍 sounds good to me. ",
"Thanks @ibeltagy , I'll try this and let you know.",
"In the long run we are planning on having a combined tokenizer and model config, so IMO it would be best to add a hardcoded `config.sep_token_id` to the Longformer config.",
"Okay, so adding `sep_token_id` in `config` and assuming question is at the beginning, can we do it this way \r\n\r\n```\r\nattention_mask = torch.ones_like(input_ids)\r\n\r\nfor i in range(input_ids.shape[0]):\r\n sep_index = (input_ids[i, :] == self.config.sep_token_id).nonzero().min().item()\r\n attention_mask[i, :sep_index] = 2\r\n\r\n # set 0 for padding values if input is padded\r\n if self.config.pad_token_id in input_ids[i, :]:\r\n pad_index = (input_ids[i, :] == self.config.pad_token_id).nonzero().min().item()\r\n attention_mask[i, pad_index:] = 0\r\n```\r\n\r\ndoes this sound good to you ?",
"> Okay, so adding `sep_token_id` in `config` and assuming question is at the beginning, can we do it this way\r\n> \r\n> ```\r\n> attention_mask = torch.ones_like(input_ids)\r\n> \r\n> for i in range(input_ids.shape[0]):\r\n> sep_index = (input_ids[i, :] == self.config.sep_token_id).nonzero().min().item()\r\n> attention_mask[i, :sep_index] = 2\r\n> \r\n> # set 0 for padding values if input is padded\r\n> if self.config.pad_token_id in input_ids[i, :]:\r\n> pad_index = (input_ids[i, :] == self.config.pad_token_id).nonzero().min().item()\r\n> attention_mask[i, pad_index:] = 0\r\n> ```\r\n> \r\n> does this sound good to you ?\r\n\r\nThanks a lot for your effort here @patil-suraj !\r\n\r\n1) I would prefer to not have a `for loop` here. I think it'd be nicer to just use tensor operations, exactly like @ibeltagy implemented it here: \r\nhttps://github.com/allenai/longformer/blob/e007ba9b52c550048e5981c8385980cc84359bc4/scripts/triviaqa.py#L411\r\nI think you only have to replace `self.tokenizer.eos_token_id` with `self.config.sep_token_id`. \r\n\r\n2) No need to pad the `input_ids` with \r\n```python \r\n # set 0 for padding values if input is padded\r\n if self.config.pad_token_id in input_ids[i, :]:\r\n pad_index = (input_ids[i, :] == self.config.pad_token_id).nonzero().min().item()\r\n attention_mask[i, pad_index:] = 0\r\n```\r\n\r\nI think you can remove that part of the code because our tokenizers automatically correctly put the 0 in `attention_mask`. \r\n\r\nThinking a bit more about I'm actually not anymore 100% sure whether this function should be in the `forward()` function of the `LongformerForQuestionAnswering`. Maybe it would be better to have it in the tokenizer function...not sure...will have to think about it.\r\n\r\nLet's implement it in the forward function as suggested for now :-) ",
"Thanks, @patil-suraj ! If you don't mind, I want to suggest one more thing to add. I think it will be useful if this function alerts the user when the number of global attention tokens is large or the question is on the wrong side. It will be good to add something like: \r\n\r\n```\r\nif num of global attention positions > max(self.config.attention_window)`:\r\n logger.warning('something something')\r\n```\r\n\r\n@patrickvonplaten, I see why you are thinking it might be better to have it in the tokenizer, but I think that it can quickly get complicated because the global attention setting needs to change based on the task.",
"@ibeltagy \r\nI think we will need to alert the user about that in all `Longformer` tasks, so can we add that warning in the base `LongformerModel` instead of `LongformerForQuestionAnswering` ?\r\n\r\n@patrickvonplaten \r\nI did tried to vectorize it, but that code assumes that all the questions in the batch have same length. So I'm not sure if we can make that assumption here . Also looking at this function\r\n\r\n``` \r\ndef _get_question_end_index(self, input_ids):\r\n eos_token_indices = (input_ids == self.tokenizer.eos_token_id).nonzero()\r\n assert eos_token_indices.ndim == 2\r\n assert eos_token_indices.size(0) == 2 * input_ids.size(0)\r\n assert eos_token_indices.size(1) == 2\r\n return eos_token_indices.view(input_ids.size(0), 2, 2)[:, 0, 1]\r\n```\r\nit seems that it makes the assumption that `eos_token_id/sep_token_id` occurs twice in the input, but if we use the default `sep_token_id` then it occurs three times in the input, if we encode que and context as input pair.\r\n\r\nSo looking at all this, would it be better if we just provide this as a utility and keep the `forward` method same?",
"You are right about the variable number of global attention per batch, but it can still be vectorized,\r\n\r\n1) In the function you mentioned, the following line\r\n\r\n```\r\n return eos_token_indices.view(input_ids.size(0), 2, 2)[:, 0, 1]\r\n```\r\n\r\nneeds to change to the following because, as you said, you have 3 sep/eos tokens\r\n\r\n```\r\n return eos_token_indices.view(input_ids.size(0), 3, 2)[:, 0, 1]\r\n```\r\n\r\n2) Now given `question_end_index` you can set the `attention_mask` as follows: \r\n\r\n```\r\nquestion_end_index = question_end_index.unsqueeze(dim=1) # size: batch_size x 1\r\n# bool attention mask with True in locations of global attention\r\nattention_mask = torch.arange(input_ids.size(1)).expand_as(input_ids) < question_end_index\r\nattention_mask = attention_mask.int() + 1 # from True, False to 2, 1\r\n```\r\n",
"Thanks ! @ibeltagy ",
"Awesome work so far @patil-suraj - I think after the changes as discussed above, we can merge this :-) ",
"Oh and one small thing I forgot to add. Could you add a test for `LongformerQuestionAnswering`. I think you can more or less copy this test here: \r\nhttps://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/tests/test_modeling_bert.py#L311",
"> Awesome work so far @patil-suraj - I think after the changes as discussed above, we can merge this :-)\r\n\r\nHappy to contribute 🤗",
"> Awesome work so far @patil-suraj - I think after the changes as discussed above, we can merge this :-)\r\n\r\nAlso should we add support for `QuestionAnsweringPipeline` before merging or should that be done in another PR ?",
"> > Awesome work so far @patil-suraj - I think after the changes as discussed above, we can merge this :-)\r\n> \r\n> Also should we add support for `QuestionAnsweringPipeline` before merging or should that be done in another PR ?\r\n\r\nLet's do this in another PR :-) ",
"Ok great, I did a little change in the config @patil-suraj, but it looks good to merge for me now! @patil-suraj can you fix the code quality? It's actually quite easy to do:\r\n1) run `flake8 src/transformers/modeling_longformer.py` - it will show you exactly which lines need to be fixed. In your case, all errors are redundant white spaces.\r\nJust delete and re-add lines 791, 794 and 798 without a white space and delete the trailing (at the end of the line) white spaces in line 800.",
"@ibeltagy - ok for you to be merged? ",
"> Ok great, I did a little change in the config @patil-suraj, but it looks good to merge for me now! @patil-suraj can you fix the code quality? It's actually quite easy to do:\r\n> \r\n> 1. run `flake8 src/transformers/modeling_longformer.py` - it will show you exactly which lines need to be fixed. In your case, all errors are redundant white spaces.\r\n> Just delete and re-add lines 791, 794 and 798 without a white space and delete the trailing (at the end of the line) white spaces in line 800.\r\n\r\nSure.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=h1) Report\n> Merging [#4500](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.21%`.\n> The diff coverage is `94.44%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4500 +/- ##\n==========================================\n+ Coverage 77.87% 78.09% +0.21% \n==========================================\n Files 123 123 \n Lines 20566 20617 +51 \n==========================================\n+ Hits 16016 16100 +84 \n+ Misses 4550 4517 -33 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.40% <94.00%> (+14.45%)` | :arrow_up: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.57% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=footer). Last update [a34a989...a198607](https://codecov.io/gh/huggingface/transformers/pull/4500?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok great, all green - merging! Hope that's ok with you @ibeltagy ",
"Looks great. Thanks, @patil-suraj.",
"https://colab.research.google.com/drive/1ZwnA8NCKOM4HBvaRRpjuVmAaM--x92hN?usp=sharing\r\n\r\nTried to use the longformer with simpletransformers library and tried out the example but I am getting two different errors.\r\n\r\nthe first error with simpletransformers is an assertion-error\r\n`AssertionError: There should be exactly three separator tokens in every sample for questions answering`\r\n\r\nThe second error from example is and tensor error\r\n`TypeError: only integer tensors of a single element can be converted to an index`",
"> This PR adds `LongformerForQuestionAnswering` following `RobertaForQuestionAnswering`.\r\n> \r\n> The code is almost identical to `RobertaForQuestionAnswering`, just had to remove `head_mask` parameter from forward method of `RobertaForQuestionAnswering`.\r\n> \r\n> Also trained the model to verify this. You can check inference [here](https://colab.research.google.com/drive/1WGgYuBEzGvkvhGOrQxB94Jr1nwCPyblu?usp=sharing)\r\n> \r\n> @patrickvonplaten\r\n\r\nBtw @patil-suraj, feel free to upload the model you trained on the model hub. It's a `longformer-base-4096` fine-tuned on Squad no? It'd be great if you can upload the model: https://huggingface.co/transformers/model_sharing.html",
"> > This PR adds `LongformerForQuestionAnswering` following `RobertaForQuestionAnswering`.\r\n> > The code is almost identical to `RobertaForQuestionAnswering`, just had to remove `head_mask` parameter from forward method of `RobertaForQuestionAnswering`.\r\n> > Also trained the model to verify this. You can check inference [here](https://colab.research.google.com/drive/1WGgYuBEzGvkvhGOrQxB94Jr1nwCPyblu?usp=sharing)\r\n> > @patrickvonplaten\r\n> \r\n> Btw @patil-suraj, feel free to upload the model you trained on the model hub. It's a `longformer-base-4096` fine-tuned on Squad no? It'd be great if you can upload the model: https://huggingface.co/transformers/model_sharing.html\r\n\r\nYes, I'm training the model as we are speaking :). The previous model was trained with question at the end so I'm training it again "
] | 1,590 | 1,590 | 1,590 | MEMBER | null | This PR adds `LongformerForQuestionAnswering` following `RobertaForQuestionAnswering`.
The code is almost identical to `RobertaForQuestionAnswering`, just had to remove `head_mask` parameter from forward method of `RobertaForQuestionAnswering`.
Also trained the model to verify this. You can check inference [here](https://colab.research.google.com/drive/1WGgYuBEzGvkvhGOrQxB94Jr1nwCPyblu?usp=sharing)
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4500/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4500/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4500",
"html_url": "https://github.com/huggingface/transformers/pull/4500",
"diff_url": "https://github.com/huggingface/transformers/pull/4500.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4500.patch",
"merged_at": 1590425017000
} |
https://api.github.com/repos/huggingface/transformers/issues/4499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4499/comments | https://api.github.com/repos/huggingface/transformers/issues/4499/events | https://github.com/huggingface/transformers/pull/4499 | 622,536,853 | MDExOlB1bGxSZXF1ZXN0NDIxMzgyOTY5 | 4,499 | [T5] Fix Cross Attention position bias | {
"login": "ZhuBaohe",
"id": 35796307,
"node_id": "MDQ6VXNlcjM1Nzk2MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/35796307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhuBaohe",
"html_url": "https://github.com/ZhuBaohe",
"followers_url": "https://api.github.com/users/ZhuBaohe/followers",
"following_url": "https://api.github.com/users/ZhuBaohe/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhuBaohe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhuBaohe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhuBaohe/subscriptions",
"organizations_url": "https://api.github.com/users/ZhuBaohe/orgs",
"repos_url": "https://api.github.com/users/ZhuBaohe/repos",
"events_url": "https://api.github.com/users/ZhuBaohe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhuBaohe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=h1) Report\n> Merging [#4499](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4499 +/- ##\n==========================================\n- Coverage 77.83% 77.82% -0.02% \n==========================================\n Files 123 123 \n Lines 20514 20514 \n==========================================\n- Hits 15968 15964 -4 \n- Misses 4546 4550 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4499/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.53% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4499/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `95.16% <100.00%> (ø)` | |\n| [src/transformers/hf\\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4499/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4499/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=footer). Last update [a086527...e9775b2](https://codecov.io/gh/huggingface/transformers/pull/4499?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @ZhuBaohe, \r\n\r\nThansk for your PR! Can you explain a bit more in-detail what the fix is doing here? :-) ",
"@patrickvonplaten \r\n\r\nI fixes a bug that the variable **encoder_decoder_position_bias** was incorrectly assigned by cross-attention weights, not by cross-attention position bias.\r\n\r\nSee Line 745 of the file modeling_t5.py as follow:\r\n```\r\n# layer_outputs = hidden-states, -> 0\r\n key-value-states, -> 1\r\n (self-attention weights), -> 2 \r\n (self-attention position bias), -> 3 \r\n (cross-attention weights), -> 4 \r\n (cross-attention position bias) -> 5 \r\n```\r\n**encoder_decoder_position_bias** should be assigned by layer_outputs[5] instead of layer_outputs[4] .",
"Great, I agree with you. Previously the attention weights of the cross attention layer were taken instead of the bias. \r\n\r\n@LysandreJik @thomwolf I am quite surprised that we did not see an error earlier. I checked the slow tests and the summarization / translation results are equivalent as before. \r\n\r\nSo good to merge for me!",
"Surprising indeed @patrickvonplaten , I did fix a similar bug when implementing T5.\r\n\r\nWe should switch to NamedTuples one day 😄 "
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | This PR fixes the Cross Attention position bias assignment in Class T5Stack. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4499/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4499",
"html_url": "https://github.com/huggingface/transformers/pull/4499",
"diff_url": "https://github.com/huggingface/transformers/pull/4499.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4499.patch",
"merged_at": 1590497845000
} |
https://api.github.com/repos/huggingface/transformers/issues/4498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4498/comments | https://api.github.com/repos/huggingface/transformers/issues/4498/events | https://github.com/huggingface/transformers/issues/4498 | 622,531,704 | MDU6SXNzdWU2MjI1MzE3MDQ= | 4,498 | Pre-trained electra-large model doesn't converge when fine-tuned on SST-2 | {
"login": "shanybarhom",
"id": 47103592,
"node_id": "MDQ6VXNlcjQ3MTAzNTky",
"avatar_url": "https://avatars.githubusercontent.com/u/47103592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shanybarhom",
"html_url": "https://github.com/shanybarhom",
"followers_url": "https://api.github.com/users/shanybarhom/followers",
"following_url": "https://api.github.com/users/shanybarhom/following{/other_user}",
"gists_url": "https://api.github.com/users/shanybarhom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shanybarhom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shanybarhom/subscriptions",
"organizations_url": "https://api.github.com/users/shanybarhom/orgs",
"repos_url": "https://api.github.com/users/shanybarhom/repos",
"events_url": "https://api.github.com/users/shanybarhom/events{/privacy}",
"received_events_url": "https://api.github.com/users/shanybarhom/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This is a general question and does not sound like a bug. With these large models it is sometimes hard to find a good set of hyperparameters to get the model to converge well. The same is true, and reported, for ALBERT. I don't think this is a bug.",
"@shanybarhom A good start would be to use the hyper-parameters mentioned in the [ELECTRA](https://arxiv.org/abs/2003.10555) paper :) Just refer to table 7.\r\n\r\nSo your batch size, adam epsilon and epochs are very different compared to the ELECTRA parameters.",
"Thanks, @stefan-it I've tried to use the same hyper-parameters as mentioned in the ELECTRA paper (lr=0.00005, batch_size=32, adam_epsilon=0.000001, epochs=3), but the electra-large model still doesn't converge (accuracy of ~50%). \r\n\r\n@BramVanroy I thought it is a bug since the electra-base and the electra-small converge quite quickly (90% accuracy after the first epoch) with the same code and data, while the large model is stuck on ~50% during the training, so it felt like a bug, but of course, it may not.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am facing the same problem with ELECTRA-large. I really appreciate if there is any further direction for this."
] | 1,590 | 1,637 | 1,596 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): ELECTRA
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: SST-2
## To reproduce
Steps to reproduce the behavior:
1. Load the large pre-trained ELECTRA model using ElectraModel.from_pretrained('google/electra-large-discriminator', output_hidden_states=True)
2. fine-tune it on SST-2 using a simple binary classification head (linear, ReLU, linear, Sigmoid) on top of the [CLS] hidden state with BCEWithLogitsLoss and AdamW for 3-4 epochs.
3. The model is stuck at ~55% accuracy throughout training and the loss increases
Important: When I do the same with 'google/electra-base-discriminator', and 'google/electra-small-discriminator' I'm getting an accuracy of ~93% and 89% (respectively) on the first epoch.
Hyperparameters:
batch_size: 16
lr: 0.000005
adam_epsilon: 0.0000001
max_len: 32
## Expected behavior
I expected the fine-tuned electra-large model to outperform the base and small ELECTRA models and achieve better than ~50% accuracy.
## Environment info
- `transformers` version: 2.9.0
- Platform: Linux-5.3.0-1017-aws-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
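For reference, a minimal sketch of the head described in this report (the hidden size of 1024 for `electra-large` is an assumption, as are all names below). One detail worth checking: `BCEWithLogitsLoss` applies a sigmoid internally, so placing an explicit `Sigmoid` layer before it squashes the logits and can stall training, which is a plausible culprit for the ~50% accuracy:

```python
import torch.nn as nn
from transformers import ElectraModel


class ElectraBinaryClassifier(nn.Module):
    def __init__(self, model_name="google/electra-large-discriminator", hidden_size=1024):
        super().__init__()
        self.electra = ElectraModel.from_pretrained(model_name)
        # Head from the report, minus the final Sigmoid (see note above).
        self.head = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.ReLU(), nn.Linear(hidden_size, 1)
        )

    def forward(self, input_ids, attention_mask=None):
        last_hidden = self.electra(input_ids, attention_mask=attention_mask)[0]
        return self.head(last_hidden[:, 0]).squeeze(-1)  # [CLS] logits

# loss_fn = nn.BCEWithLogitsLoss()  # pairs with raw logits, not sigmoid outputs
```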
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4498/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4497/comments | https://api.github.com/repos/huggingface/transformers/issues/4497/events | https://github.com/huggingface/transformers/issues/4497 | 622,361,114 | MDU6SXNzdWU2MjIzNjExMTQ= | 4,497 | Tokenize something with a "." in between Decode these ids, you will find it mismatch | {
"login": "Xunzhuo",
"id": 48784001,
"node_id": "MDQ6VXNlcjQ4Nzg0MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48784001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xunzhuo",
"html_url": "https://github.com/Xunzhuo",
"followers_url": "https://api.github.com/users/Xunzhuo/followers",
"following_url": "https://api.github.com/users/Xunzhuo/following{/other_user}",
"gists_url": "https://api.github.com/users/Xunzhuo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xunzhuo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xunzhuo/subscriptions",
"organizations_url": "https://api.github.com/users/Xunzhuo/orgs",
"repos_url": "https://api.github.com/users/Xunzhuo/repos",
"events_url": "https://api.github.com/users/Xunzhuo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xunzhuo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Your use case seems specific, so maybe you should try a custom Tokenizer via the `tokenizers` library. I believe the results you're getting are the intended behavior.\r\n\r\nFor example, any generic sentence where someone forgets to put a space after the period would end up tokenized incorrectly otherwise:\r\n\r\n`I love lamp.No I really love lamp.` would leave you with a token `lamp.No`, which is incorrect, eh?",
"tks~ it helps a lot ~"
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLNet
Language I am using the model on (English, Chinese ...): Chinese
The problem arises when using: Tokenizer
* [ ] the official example scripts: (give details below) N/A
* [ ] my own modified scripts: (give details below) N/A
The tasks I am working on is: Any
* [ ] an official GLUE/SQUaD task: (give the name) N/A
* [ ] my own task or dataset: (give details below) N/A
## To reproduce
Steps to reproduce the behavior:
1. Load any BERT tokenizer
2. Tokenize something with a "." in between
3. Decode these ids; you will find they don't match
x = tokenizer.encode('AN.C', add_special_tokens=False)
z = tokenizer.decode(x)
It prints:
AN. C
## Expected behavior
AN.C
## Environment info
- `transformers` version:
- Platform: CentOS
- Python version: 3.6
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?): GPU
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4497/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4496/comments | https://api.github.com/repos/huggingface/transformers/issues/4496/events | https://github.com/huggingface/transformers/issues/4496 | 622,355,210 | MDU6SXNzdWU2MjIzNTUyMTA= | 4,496 | python run_glue.py with the AttributeError: 'NoneType' object has no attribute 'seek' | {
"login": "zhuqunxi",
"id": 22273557,
"node_id": "MDQ6VXNlcjIyMjczNTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/22273557?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuqunxi",
"html_url": "https://github.com/zhuqunxi",
"followers_url": "https://api.github.com/users/zhuqunxi/followers",
"following_url": "https://api.github.com/users/zhuqunxi/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuqunxi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuqunxi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuqunxi/subscriptions",
"organizations_url": "https://api.github.com/users/zhuqunxi/orgs",
"repos_url": "https://api.github.com/users/zhuqunxi/repos",
"events_url": "https://api.github.com/users/zhuqunxi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuqunxi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052574,
"node_id": "MDU6TGFiZWwxODM0MDUyNTc0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification",
"name": "Ex: Sequence Classification",
"color": "46FFCF",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"You really need to put more effort in how you ask questions. Just throwing in some error trace and leaving it up to us to figure out what you want or where things go wrong is not the way to go. Use [**code blocks**](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks) and when using an example script, post your environment (as per the **template**) and post the command that you used.\r\n\r\nIn your case it seems that you wanted to load a tensorflow model with PyTorch. That won't work. If you need to use Tensorflow models, use [run_tf_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py) instead.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Have you solved it ?"
] | 1,590 | 1,686 | 1,596 | NONE | null | # 🐛 Bug
Traceback (most recent call last):
File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 191, in _check_seekable
f.seek(f.tell())
AttributeError: 'NoneType' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/transformers/modeling_utils.py", line 659, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location="cpu")
File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 387, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 549, in _load
_check_seekable(f)
File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 194, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/torch/serialization.py", line 187, in raise_err_msg
raise type(e)(msg)
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./examples/text-classification/run_glue.py", line 202, in <module>
main()
File "./examples/text-classification/run_glue.py", line 133, in main
cache_dir=model_args.cache_dir,
File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/transformers/modeling_auto.py", line 874, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/gpfs/home/bsub/anaconda3/envs/abc/lib/python3.6/site-packages/transformers/modeling_utils.py", line 662, in from_pretrained
"Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4496/reactions",
"total_count": 1,
"+1": 0,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4496/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4495/comments | https://api.github.com/repos/huggingface/transformers/issues/4495/events | https://github.com/huggingface/transformers/issues/4495 | 622,312,127 | MDU6SXNzdWU2MjIzMTIxMjc= | 4,495 | ❓ [BART] Why Decoder Layer Normalization is applied only at the last layer ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"That `layer_norm` should be None in `bart-large*`.\r\nSee [this comment](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L266)\r\n\r\nNo final `layer_norm` was applied before `mbart` was added, afaict."
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | # ❓ Questions & Help
It seems this line :
https://github.com/huggingface/transformers/blob/efbc1c5a9d96048ab11f8d746fe51107cb91646f/src/transformers/modeling_bart.py#L524
was modified when MBART was added.
---
Before, Layer Normalization was applied after **all** layers of the decoder (_similar to the encoder, if the config was set appropriately_).
But now, Layer Normalization is applied **only at the end**, even for other BART models (_not MBART_).
---
**Is it expected ? What's the reason behind this logic ?**
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4495/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4494/comments | https://api.github.com/repos/huggingface/transformers/issues/4494/events | https://github.com/huggingface/transformers/issues/4494 | 622,222,502 | MDU6SXNzdWU2MjIyMjI1MDI= | 4,494 | Incorporate HuggingFace 'nlp' library in examples | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
}
] | closed | false | null | [] | [
"Love this idea. @thomwolf @julien-c what do you guys think about adding `nlp` as a dependency in `examples/requirements.txt`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | CONTRIBUTOR | null | # 🚀 Feature request
I propose we replace the custom data downloading/preprocessing logic found within the examples directory with the new [**HuggingFace `nlp` Library**](https://github.com/huggingface/nlp) where applicable.
## Motivation
The examples directory is filled with custom shell scripts that download and process common research datasets. These scripts work great, but are at times tricky to follow. I'm sure this can be discouraging for new users looking to try out `transformers` for the first time.
I'm hoping `nlp` will make the examples generally more accessible for both new and experienced users. And yeah...I guess it's probably not too bad for the brand either. 😉
## Your contribution
I'll get a WIP PR pushed up this weekend. I'll focus on the `pytorch_lightning` examples for now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4494/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4494/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4493/comments | https://api.github.com/repos/huggingface/transformers/issues/4493/events | https://github.com/huggingface/transformers/pull/4493 | 622,216,265 | MDExOlB1bGxSZXF1ZXN0NDIxMTI0OTg4 | 4,493 | Use args.per_gpu_train_batch_size instead of args.train_batch_size in… | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=h1) Report\n> Merging [#4493](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/865d4d595eefc8cc9cee58fec9179bd182be0e2e&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4493 +/- ##\n=======================================\n Coverage 77.90% 77.91% \n=======================================\n Files 123 123 \n Lines 20472 20472 \n=======================================\n+ Hits 15949 15950 +1 \n+ Misses 4523 4522 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.76% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=footer). Last update [865d4d5...7ffd712](https://codecov.io/gh/huggingface/transformers/pull/4493?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Things could be clearer, but the only case where train_batch_size is different from per_gpu_train_batch_size is in `nn.DataParallel`. \r\n\r\nAnd in DataParallel, your dataloader's apparent batch size will be scattered amongst the devices, so I believe the `batch_size=self.args.train_batch_size` is correct\r\n\r\n",
"(Note that DataParallel is not really recommended anymore as a way to utilize multiple GPUs, vs. torch.distributed)",
"Gotcha, thanks for the context. Is the user expected to pass both `--train_batch_size` and `--per_gpu_train_batch_size` together then? In `examples/run_language_modeling.py` as it stands, the `--train_batch_size` affects the true batch size, but `--per_gpu_train_batch_size` is what is printed to stdout here: https://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/src/transformers/trainer.py#L419",
"No, user would not pass a `--train_batch_size` – was it documented somewhere that they should?",
"My misunderstanding then. There are [a few instances](https://grep.app/search?q=--train_batch_size&filter[repo][0]=huggingface/transformers) through the codebase where that arg is expected, but I see that in this example it's a derived property. Thanks for the help, closing the issue."
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | … Trainer.
It appears that this is preferred, per https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py. This also matches the calculation which is printed referring to batch size at https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L432.
As a side note, the GPT-2 example in https://github.com/huggingface/transformers/blob/master/examples/language-modeling/README.md no longer works. There is a default `per_gpu_train_batch_size=8`, which throws OOM on a Tesla V100 with 32GB RAM. I ran it successfully with `--per_gpu_train_batch_size=1`, and it used 7GB of RAM. So we probably want to add that hyperparameter to the example command. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4493/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4493",
"html_url": "https://github.com/huggingface/transformers/pull/4493",
"diff_url": "https://github.com/huggingface/transformers/pull/4493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4493.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4492/comments | https://api.github.com/repos/huggingface/transformers/issues/4492/events | https://github.com/huggingface/transformers/issues/4492 | 622,141,936 | MDU6SXNzdWU2MjIxNDE5MzY= | 4,492 | Cannot load reformer-enwik8 tokenizer | {
"login": "erickrf",
"id": 294483,
"node_id": "MDQ6VXNlcjI5NDQ4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erickrf",
"html_url": "https://github.com/erickrf",
"followers_url": "https://api.github.com/users/erickrf/followers",
"following_url": "https://api.github.com/users/erickrf/following{/other_user}",
"gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erickrf/subscriptions",
"organizations_url": "https://api.github.com/users/erickrf/orgs",
"repos_url": "https://api.github.com/users/erickrf/repos",
"events_url": "https://api.github.com/users/erickrf/events{/privacy}",
"received_events_url": "https://api.github.com/users/erickrf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 2052904485,
"node_id": "MDU6TGFiZWwyMDUyOTA0NDg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/reformer",
"name": "reformer",
"color": "5319e7",
"default": false,
"description": "Everything related to the reformer model"
}
] | closed | false | null | [] | [
"Hi. This is not a bug but is expected: since the model works on the character level, a tokenizer is not \"required\". You can read more in [the model card](https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8) on how you can encode/decode your data.",
"@erickrf can you share how you got to train the \"reformer\" model. I´m trying to utilize the \"google/reformer-enwik8\" to train a Portuguese model but I just got the same error of \r\n`Model name 'google/reformer-enwik8' was not found in tokenizers`",
"@bratao I answered this in my comment... Open the link thzt I posted and scroll down. They tell you how to do tokenisation. No need to load a tokenizer as usual. ",
"@BramVanroy \r\n\r\nmy code is below \r\n```shell\r\npython examples/seq2seq/finetune_trainer.py --model_name_or_path google/reformer-enwik8 --do_train --do_eval --task translation_en_to_de --data_dir /lustre/dataset/wmt17_en_de/ --output_dir /home2/zhenggo1/checkpoint/reformer_translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate\r\n```\r\nand the bug is below,so what the reason? thks!\r\n```shell\r\nTraceback (most recent call last):\r\n File \"examples/seq2seq/finetune_trainer.py\", line 367, in <module>\r\n main()\r\n File \"examples/seq2seq/finetune_trainer.py\", line 206, in main\r\n cache_dir=model_args.cache_dir,\r\n File \"/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/models/auto/tokenization_auto.py\", line 385, in from_pretrained\r\n return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/home2/zhenggo1/LowPrecisionInferenceTool/examples/pytorch/huggingface_transformers/src/transformers/tokenization_utils_base.py\", line 1760, in from_pretrained\r\n raise EnvironmentError(msg)\r\nOSError: Can't load tokenizer for 'google/reformer-enwik8'. Make sure that:\r\n\r\n- 'google/reformer-enwik8' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'google/reformer-enwik8' is the correct path to a directory containing relevant tokenizer files\r\n\r\n```\r\n",
"@LeopoldACC Please post a new issue so that some one can have a look."
] | 1,590 | 1,615 | 1,590 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Reformer tokenizer
## To reproduce
Steps to reproduce the behavior:
1. Try to load the pretrained reformer-enwik8 tokenizer with `AutoTokenizer.from_pretrained("google/reformer-enwik8")`
This is the error I got:
```
OSError Traceback (most recent call last)
<ipython-input-51-ab9a64363cc0> in <module>
----> 1 AutoTokenizer.from_pretrained("google/reformer-enwik8")
~/.virtualenvs/sparseref/lib/python3.7/site-packages/transformers-2.9.0-py3.7.egg/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
198 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
199 else:
--> 200 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
201
202 raise ValueError(
~/.virtualenvs/sparseref/lib/python3.7/site-packages/transformers-2.9.0-py3.7.egg/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)
896
897 """
--> 898 return cls._from_pretrained(*inputs, **kwargs)
899
900 @classmethod
~/.virtualenvs/sparseref/lib/python3.7/site-packages/transformers-2.9.0-py3.7.egg/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1001 ", ".join(s3_models),
1002 pretrained_model_name_or_path,
-> 1003 list(cls.vocab_files_names.values()),
1004 )
1005 )
OSError: Model name 'google/reformer-enwik8' was not found in tokenizers model name list (google/reformer-crime-and-punishment). We assumed 'google/reformer-enwik8' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
```
I tried with and without `google/`, same result. However, it did print the download progress bar. Trying to load the `crime-and-punishment` Reformer tokenizer works.
- `transformers` version: 2.9.0
- Platform: macOS
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0, no GPU
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4492/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4492/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4491/comments | https://api.github.com/repos/huggingface/transformers/issues/4491/events | https://github.com/huggingface/transformers/issues/4491 | 622,130,895 | MDU6SXNzdWU2MjIxMzA4OTU= | 4,491 | Windows: Can't find vocabulary file for MarianTokenizer | {
"login": "pgfeldman",
"id": 6231199,
"node_id": "MDQ6VXNlcjYyMzExOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6231199?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pgfeldman",
"html_url": "https://github.com/pgfeldman",
"followers_url": "https://api.github.com/users/pgfeldman/followers",
"following_url": "https://api.github.com/users/pgfeldman/following{/other_user}",
"gists_url": "https://api.github.com/users/pgfeldman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pgfeldman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pgfeldman/subscriptions",
"organizations_url": "https://api.github.com/users/pgfeldman/orgs",
"repos_url": "https://api.github.com/users/pgfeldman/repos",
"events_url": "https://api.github.com/users/pgfeldman/events{/privacy}",
"received_events_url": "https://api.github.com/users/pgfeldman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I cannot reproduce this. This works for me (same environment except Python 3.8 which should not make a difference). Can you try again but force_overwrite potentially corrupt files?\r\n\r\n```python\r\ntok = MarianTokenizer.from_pretrained(mname, force_download=True)\r\n```",
"Hi, \n\nI rebased the transformers project just before running this and updated\nwith \"pip install --upgrade .\" in the root transformers directory. \n\nHere is the code as run:\n\nfrom transformers import MarianTokenizer, MarianMTModel\nfrom typing import List\nsrc = 'fr' # source language\ntrg = 'en' # target language\nsample_text = \"où est l'arrêt de bus ?\"\nmname = f'Helsinki-NLP/opus-mt-{src}-{trg}'\n\nmodel = MarianMTModel.from_pretrained(mname, force_download=True)\ntok = MarianTokenizer.from_pretrained(mname, force_download=True)\n\n# batch = tok.prepare_translation_batch(src_texts=[sample_text]) #\ndon't need tgt_text for inference\n# gen = model.generate(**batch) # for forward pass: model(**batch)\n# words: List[str] = tok.batch_decode(gen, skip_special_tokens=True) #\nreturns \"Where is the the bus stop ?\"\n\nHere is the terminal output:\n\n2020-05-22 05:45:15.204824: I\ntensorflow/stream_executor/platform/default/dso_loader.cc:44]\nSuccessfully opened dynamic library cudart64_101.dll \nDownloading: 100%|██████████| 1.13k/1.13k [00:00<00:00, 568kB/s] \nDownloading: 100%|██████████| 301M/301M [00:32<00:00, 9.34MB/s] \nDownloading: 100%|██████████| 802k/802k [00:00<00:00, 5.85MB/s] \nDownloading: 100%|██████████| 778k/778k [00:00<00:00, 5.71MB/s] \nDownloading: 100%|██████████| 1.34M/1.34M [00:00<00:00, 6.69MB/s] \nDownloading: 100%|██████████| 42.0/42.0 [00:00<00:00, 13.8kB/s] \nstdbuf was not found; communication with perl may hang due to stdio\nbuffering. \nTraceback (most recent call last): \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_utils.py\", line\n1055, in _from_pretrained \n tokenizer = cls(*init_inputs, **init_kwargs) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_marian.py\",\nline 89, in __init__ \n self._setup_normalizer() \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_marian.py\",\nline 95, in _setup_normalizer \n self.punc_normalizer = MosesPunctuationNormalizer(self.source_lang) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\mosestokenizer\\punctnormalizer.py\", line\n47, in __init__ \n super().__init__(argv) \n File \"C:\\Program Files\\Python\\lib\\site-packages\\toolwrapper.py\", line\n64, in __init__ \n self.start() \n File \"C:\\Program Files\\Python\\lib\\site-packages\\toolwrapper.py\", line\n108, in start \n env=env, \n File \"C:\\Program Files\\Python\\lib\\subprocess.py\", line 709, in\n__init__ \n restore_signals, start_new_session) \n File \"C:\\Program Files\\Python\\lib\\subprocess.py\", line 997, in\n_execute_child \n startupinfo) \nFileNotFoundError: [WinError 2] The system cannot find the file\nspecified \n\nDuring handling of the above exception, another exception occurred: \n\nTraceback (most recent call last): \n File\n\"C:/Development/Research/COVID-19-Misinfo2/src/translate_test_2.py\",\nline 9, in <module> \n tok = MarianTokenizer.from_pretrained(mname, force_download=True) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_utils.py\", line\n902, in from_pretrained \n return cls._from_pretrained(*inputs, **kwargs) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_utils.py\", line\n1058, in _from_pretrained \n \"Unable to load vocabulary from file. \" \nOSError: Unable to load vocabulary from file. Please check that the\nprovided vocabulary is accessible and not corrupted. 
\n\nProcess finished with exit code 1 \n\nI also tried this with 'Helsinki-NLP/opus-mt-ROMANCE-en' and had the\nsame results. I also stepped through the code in the debugger and\nmanually downloaded the files using my browser and pointed the\n*.from_retrained() methods to that directory. Here is the relevant code:\n\nmodel_name = 'Helsinki-NLP/opus-mt-ROMANCE-en'\n# see tokenizer.supported_language_codes for choices\nmodel =\nMarianMTModel.from_pretrained(\"./models/opus-mt-ROMANCE-en/model\")\n#model.save_pretrained(\"./models/opus-mt-ROMANCE-en/model\")\ntokenizer =\nMarianTokenizer.from_pretrained(\"./models/opus-mt-ROMANCE-en/model\")\n#tokenizer.save_pretrained(\"./models/opus-mt-ROMANCE-en/tokenizer\")\n\nAnd here is the directory list. I've also attached all these files\nexcept the pytorch.model.bin. If there is a problem with these files,\nplease send me the correct ones and I can try this locally\n\n Directory:\nC:\\Development\\Research\\COVID-19-Misinfo2\\src\\models\\opus-mt-ROMANCE-en\\model\n\n\nMode LastWriteTime Length Name \n---- ------------- ------ ---- \n-a---- 5/20/2020 5:52 PM 1163 config.json \n-a---- 5/20/2020 5:52 PM 312086495 pytorch_model.bin \n-a---- 5/20/2020 6:05 PM 800087 source.spm \n-a---- 5/20/2020 6:08 PM 265 tokenizer_config.json \n-a---- 5/20/2020 6:07 PM 1460304 vocab.json \n\nThis had the same effect as the remote download\n\n2020-05-22 05:58:34.251856: I\ntensorflow/stream_executor/platform/default/dso_loader.cc:44]\nSuccessfully opened dynamic library cudart64_101.dll \ndir = C:\\Development\\Research\\COVID-19-Misinfo2\\src \nTraceback (most recent call last): \n File\n\"C:/Development/Research/COVID-19-Misinfo2/src/translate_test_1.py\",\nline 15, in <module> \n tokenizer =\nMarianTokenizer.from_pretrained(\"./models/opus-mt-ROMANCE-en/model\") \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_utils.py\", line\n902, in from_pretrained \n return cls._from_pretrained(*inputs, **kwargs) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_utils.py\", line\n1055, in _from_pretrained \n tokenizer = cls(*init_inputs, **init_kwargs) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_marian.py\",\nline 84, in __init__ \n self.spm_target = load_spm(target_spm) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_marian.py\",\nline 236, in load_spm \n spm.Load(path) \n File \"C:\\Program Files\\Python\\lib\\site-packages\\sentencepiece.py\",\nline 118, in Load \n return _sentencepiece.SentencePieceProcessor_Load(self, filename) \nTypeError: not a string \n\nProcess finished with exit code 1 \n\nI have downloaded and used the GPT-2 model without these problems using\nvery similar code\n\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\n\nHope this helps,\n\nPhil Feldman\n\n---\n\nOn 2020-05-22 05:34, Bram Vanroy wrote:\n\n> I cannot reproduce this. This works for me (same environment except Python 3.8 which should not make a difference). Can you try again but force_overwrite potentially corrupt files?\n> \n> tok = MarianTokenizer.from_pretrained(mname, force_download=True)\n> \n> --\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub [1], or unsubscribe [2].\n \n\nLinks:\n------\n[1]\nhttps://github.com/huggingface/transformers/issues/4491#issuecomment-632597198\n[2]\nhttps://github.com/notifications/unsubscribe-auth/ABPRJH7JIRH4PIEONBAXAULRSZBJXANCNFSM4NGLYESA",
"Hi @pgfeldman, I initally faced the same error but was able to resolve it by downloading the model to a specified location using the below steps\r\n```\r\ncache_dir = \"/home/transformers_files/\"\r\ncache_dir_models = cache_dir + \"default_models/\"\r\ncache_dir_tokenizers = cache_dir + \"tokenizers/\"\r\nmodel_name = 'Helsinki-NLP/opus-mt-ROMANCE-en'\r\ntokenizer = MarianTokenizer.from_pretrained(model_name, cache_dir=cache_dir_tokenizers)\r\nmodel = MarianMTModel.from_pretrained(model_name, cache_dir=cache_dir_models)\r\n```",
"Hi! I had the same issue after installing the mosestokenizer (as recommended) on Windows with Python 3.6. After I uninstalled it, it seemed to work fine! I think more investigation is needed there.",
"@BramVanroy did it work for you on windows? I also can't reproduce.",
"> @BramVanroy did it work for you on windows? I also can't reproduce.\r\n\r\nI still cannot reproduce this. I tried uninstall/reinstalling mosestokenizer and it works in both cases.\r\n\r\nFor everyone having problems, can you run the following and post its output here so that we can find similarities? @jpcorb20 @SAswinGiridhar @pgfeldman \r\n\r\n**This requires you to be on the latest master branch (on Windows at least) so install from source!**\r\n\r\n```bash\r\ntransformers-cli env\r\n```",
"I deleted and re-installed transformers and installed from source \r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two\r\nlast points. \r\n\r\n- `transformers` version: 2.11.0 \r\n- Platform: Windows-10-10.0.18362-SP0 \r\n- Python version: 3.6.4 \r\n- PyTorch version (GPU?): 1.5.0+cu101 (True) \r\n- Tensorflow version (GPU?): 2.1.0 (True) \r\n- Using GPU in script?: <fill in> \r\n- Using distributed or parallel set-up in script?: <fill in> \r\n\r\nI'm also attaching my package list\r\n[deleted by moderator for length]",
"Hello, here's mine :\r\n\r\n- `transformers` version: 2.11.0\r\n- Platform: Windows-10-10.0.18362-SP0\r\n- Python version: 3.6.7\r\n- PyTorch version (GPU?): 1.5.0+cu101 (True)\r\n- Tensorflow version (GPU?): 2.0.0 (False)\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\r\n",
"Does\r\n```python\r\ntokenizer = XLMRobertaTokenizer.from_pretrained(\"xlm-roberta-base\")\r\ntokenizer.batch_encode_plus(['stuff'])\r\n```\r\nwork?",
"Yes! \n\nHere's the code as run:\n\nfrom transformers import XLMRobertaTokenizer\n\ntokenizer = XLMRobertaTokenizer.from_pretrained(\"xlm-roberta-base\")\ntokenizer.batch_encode_plus(['stuff'])\n\nprint(\"done\")\n\nHere's the output\n\n\"C:\\Program Files\\Python\\python.exe\"\nC:/Users/Phil/AppData/Roaming/JetBrains/IntelliJIdea2020.1/scratches/transformers_error_2.py\n\n2020-06-08 17:44:17.768004: I\ntensorflow/stream_executor/platform/default/dso_loader.cc:44]\nSuccessfully opened dynamic library cudart64_101.dll \nDownloading: 100%|██████████| 5.07M/5.07M [00:00<00:00, 9.57MB/s] \ndone \n\nProcess finished with exit code 0 \n\nHope this helps, \n\nPhil\n\n---\n\nOn 2020-06-08 17:13, Sam Shleifer wrote:\n\n> Does\n> \n> tokenizer = XLMRobertaTokenizer.from_pretrained(\"xlm-roberta-base\")\n> tokenizer.batch_encode_plus(['stuff'])\n> \n> work? \n> \n> --\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub [1], or unsubscribe [2].\n \n\nLinks:\n------\n[1]\nhttps://github.com/huggingface/transformers/issues/4491#issuecomment-640889916\n[2]\nhttps://github.com/notifications/unsubscribe-auth/ABPRJHZZ3BPH7DFC36FOYJTRVVIAVANCNFSM4NGLYESA",
"Working for me too",
"Can anyone help with this issue: #5040 ?",
"> Can anyone help with this issue: #5040 ?\r\n\r\nPlease don't spam other topics like this in the future. We do our best to help where and when we can. Posting duplicate comments on different topics adds more noise than it is helpful.",
"I think this bug may be fixed on master, but I can't verify because I don't have windows. Could 1 person check and post their results? Remember to be up to date with master, your git log should contain `3d495c61e Sam Shleifer: Fix marian tokenizer save pretrained (#5043)`",
"Doesn't work on my PC, but I changed the library for the moses tokenizer in _setup_normalizer and it works:\r\n\r\n```\r\ndef _setup_normalizer(self):\r\n try:\r\n from sacremoses import MosesPunctNormalizer\r\n self.punc_normalizer = MosesPunctNormalizer(lang=self.source_lang).normalize\r\n except ImportError:\r\n warnings.warn(\"Recommended: pip install sacremoses\")\r\n self.punc_normalizer = lambda x: x\r\n```",
"Hi Sam, \n\nI just rebased, verified the gitlog, and installed using \"pip install\n--upgrade .\" I'm attaching the console record of the install. \n\nI still get the same error(s) \n\n2020-06-17 05:40:43.980254: I\ntensorflow/stream_executor/platform/default/dso_loader.cc:44]\nSuccessfully opened dynamic library cudart64_101.dll \nstdbuf was not found; communication with perl may hang due to stdio\nbuffering. \nTraceback (most recent call last): \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_utils_base.py\",\nline 1161, in _from_pretrained \n tokenizer = cls(*init_inputs, **init_kwargs) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_marian.py\",\nline 81, in __init__ \n self._setup_normalizer() \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_marian.py\",\nline 87, in _setup_normalizer \n self.punc_normalizer = MosesPunctuationNormalizer(self.source_lang) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\mosestokenizer\\punctnormalizer.py\", line\n47, in __init__ \n super().__init__(argv) \n File \"C:\\Program Files\\Python\\lib\\site-packages\\toolwrapper.py\", line\n64, in __init__ \n self.start() \n File \"C:\\Program Files\\Python\\lib\\site-packages\\toolwrapper.py\", line\n108, in start \n env=env, \n File \"C:\\Program Files\\Python\\lib\\subprocess.py\", line 709, in\n__init__ \n restore_signals, start_new_session) \n File \"C:\\Program Files\\Python\\lib\\subprocess.py\", line 997, in\n_execute_child \n startupinfo) \nFileNotFoundError: [WinError 2] The system cannot find the file\nspecified \n\nDuring handling of the above exception, another exception occurred: \n\nTraceback (most recent call last): \n File\n\"C:/Users/Phil/AppData/Roaming/JetBrains/IntelliJIdea2020.1/scratches/transformers_error.py\",\nline 9, in <module> \n tok = MarianTokenizer.from_pretrained(mname) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_utils_base.py\",\nline 1008, in from_pretrained \n return cls._from_pretrained(*inputs, **kwargs) \n File \"C:\\Program\nFiles\\Python\\lib\\site-packages\\transformers\\tokenization_utils_base.py\",\nline 1164, in _from_pretrained \n \"Unable to load vocabulary from file. \" \nOSError: Unable to load vocabulary from file. Please check that the\nprovided vocabulary is accessible and not corrupted. \n\nProcess finished with exit code 1 \n\nHope this helps \n\nPhil\n\n---\n\nOn 2020-06-16 09:50, Sam Shleifer wrote:\n\n> I think this bug may be fixed on master, but I can't verify because I don't have windows. Could 1 person check and post their results? Remember to be up to date with master, your git log should contain 3d495c61e Sam Shleifer: Fix marian tokenizer save pretrained (#5043) - (HEAD -> master, upstream/master) (2 minutes ago) \n> \n> --\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub [1], or unsubscribe [2].\n \n\nLinks:\n------\n[1]\nhttps://github.com/huggingface/transformers/issues/4491#issuecomment-644778862\n[2]\nhttps://github.com/notifications/unsubscribe-auth/ABPRJH5BKWN3OBT7DOP4PVTRW52CRANCNFSM4NGLYESA",
"Just upgraded to version 3.0, and everything is working!"
] | 1,590 | 1,593 | 1,592 | NONE | null | # 🐛 Bug

MarianTokenizer.from_pretrained() fails with Python 3.6.4 on Windows 10.
## Information
Occurs when using the example here: https://huggingface.co/transformers/model_doc/marian.html?highlight=marianmtmodel#transformers.MarianMTModel
Model I am using (Bert, XLNet ...): MarianMTModel
Language I am using the model on (English, Chinese ...): English
The problem arises when using:

* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The task I am working on is:

* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Paste code from example and run:
```Python
from transformers import MarianTokenizer, MarianMTModel
from typing import List
src = 'fr' # source language
trg = 'en' # target language
sample_text = "où est l'arrêt de bus ?"
mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
model = MarianMTModel.from_pretrained(mname)
tok = MarianTokenizer.from_pretrained(mname)
batch = tok.prepare_translation_batch(src_texts=[sample_text]) # don't need tgt_text for inference
gen = model.generate(**batch) # for forward pass: model(**batch)
words: List[str] = tok.batch_decode(gen, skip_special_tokens=True) # returns "Where is the the bus stop ?"
print(words)
```
Steps to reproduce the behavior:
1. Run the example
2. Program terminates on `tok = MarianTokenizer.from_pretrained(mname)`
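The stack trace below points at the Perl-based Moses punctuation normalizer rather than at the vocabulary files themselves. A quick way to confirm that, independent of transformers (a sketch, assuming the `mosestokenizer` package from the traceback is installed):

```python
# minimal repro of the suspected failure point: on an affected Windows
# setup this raises the same FileNotFoundError, because mosestokenizer
# spawns a perl subprocess that may not exist on PATH
from mosestokenizer import MosesPunctuationNormalizer

with MosesPunctuationNormalizer("fr") as normalize:
    print(normalize("où est l'arrêt de bus ?"))
```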
```Python
stdbuf was not found; communication with perl may hang due to stdio buffering.
Traceback (most recent call last):
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_utils.py", line 1055, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_marian.py", line 89, in __init__
self._setup_normalizer()
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_marian.py", line 95, in _setup_normalizer
self.punc_normalizer = MosesPunctuationNormalizer(self.source_lang)
File "C:\Program Files\Python\lib\site-packages\mosestokenizer\punctnormalizer.py", line 47, in __init__
super().__init__(argv)
File "C:\Program Files\Python\lib\site-packages\toolwrapper.py", line 64, in __init__
self.start()
File "C:\Program Files\Python\lib\site-packages\toolwrapper.py", line 108, in start
env=env,
File "C:\Program Files\Python\lib\subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "C:\Program Files\Python\lib\subprocess.py", line 997, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Development/Research/COVID-19-Misinfo2/src/translate_test_2.py", line 9, in <module>
tok = MarianTokenizer.from_pretrained(mname)
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_utils.py", line 902, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Program Files\Python\lib\site-packages\transformers\tokenization_utils.py", line 1058, in _from_pretrained
"Unable to load vocabulary from file. "
OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.
```
## Expected behavior
prints ["Where is the the bus stop ?"]
## Environment info
- `transformers` version: 2.9.1
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.4
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4491/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4490/comments | https://api.github.com/repos/huggingface/transformers/issues/4490/events | https://github.com/huggingface/transformers/issues/4490 | 622,130,080 | MDU6SXNzdWU2MjIxMzAwODA= | 4,490 | How to load a pruned Albert model with from_pretrained()? | {
"login": "ThomasSYT",
"id": 41875489,
"node_id": "MDQ6VXNlcjQxODc1NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/41875489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThomasSYT",
"html_url": "https://github.com/ThomasSYT",
"followers_url": "https://api.github.com/users/ThomasSYT/followers",
"following_url": "https://api.github.com/users/ThomasSYT/following{/other_user}",
"gists_url": "https://api.github.com/users/ThomasSYT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThomasSYT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThomasSYT/subscriptions",
"organizations_url": "https://api.github.com/users/ThomasSYT/orgs",
"repos_url": "https://api.github.com/users/ThomasSYT/repos",
"events_url": "https://api.github.com/users/ThomasSYT/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThomasSYT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Now I'm using head_mask instead of prune_heads. So, I didn't actually prune heads."
] | 1,590 | 1,590 | 1,590 | NONE | null | # ❓ Questions & Help
## Details
I pruned Albert during the fine-tuning phase. I was unable to load the pruned model after saving it. I tried using:
```python
import os
import torch

output_model_file = os.path.join(args.output_dir, "pytorch_model.bin")
model_state_dict = torch.load(output_model_file)
model = model_class.from_pretrained(args.output_dir, state_dict=model_state_dict)
```
but still got the same error:
```
File "run_glue.py", line 526, in main
model = model_class.from_pretrained(args.output_dir,state_dict=model_state_dict)
File "/home/user/.local/lib/python3.7/site-packages/transformers/modeling_utils.py", line 471, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for AlbertForSequenceClassification:
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.weight: copying a param with shape torch.Size([64, 768]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.weight: copying a param with shape torch.Size([64, 768]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.weight: copying a param with shape torch.Size([64, 768]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for albert.encoder.albert_layer_groups.0.albert_layers.0.attention.dense.weight: copying a param with shape torch.Size([768, 64]) from checkpoint, the shape in current model is torch.Size([768, 768]).
```
`transformers` version == 2.2.1
torch version == 1.4.0
Can anybody help?
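In case it helps anyone hitting the same size mismatch: the checkpoint holds the pruned (smaller) attention shapes, while `from_pretrained` first builds a full-size model. A minimal sketch of one workaround is to re-apply the same pruning before loading the weights; the output directory and head map below are hypothetical and have to match whatever was actually pruned during fine-tuning:

```python
import os
import torch
from transformers import AlbertConfig, AlbertForSequenceClassification

output_dir = "./albert-pruned"                   # hypothetical: your args.output_dir
config = AlbertConfig.from_pretrained(output_dir)
model = AlbertForSequenceClassification(config)  # fresh model with full-size heads
model.prune_heads({0: [0, 1, 2]})                # hypothetical: the head map used in fine-tuning
state_dict = torch.load(os.path.join(output_dir, "pytorch_model.bin"))
model.load_state_dict(state_dict)
```

Newer releases also record pruned heads in `config.pruned_heads` when saving, so `from_pretrained` can rebuild the smaller shapes automatically; 2.2.1 may predate that, hence the manual step above.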
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4490/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4489/comments | https://api.github.com/repos/huggingface/transformers/issues/4489/events | https://github.com/huggingface/transformers/pull/4489 | 621,974,891 | MDExOlB1bGxSZXF1ZXN0NDIwOTI3OTQw | 4,489 | bugfix: pass on tokenizer to pipeline in load_graph_from_args | {
"login": "RensDimmendaal",
"id": 9828683,
"node_id": "MDQ6VXNlcjk4Mjg2ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9828683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RensDimmendaal",
"html_url": "https://github.com/RensDimmendaal",
"followers_url": "https://api.github.com/users/RensDimmendaal/followers",
"following_url": "https://api.github.com/users/RensDimmendaal/following{/other_user}",
"gists_url": "https://api.github.com/users/RensDimmendaal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RensDimmendaal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RensDimmendaal/subscriptions",
"organizations_url": "https://api.github.com/users/RensDimmendaal/orgs",
"repos_url": "https://api.github.com/users/RensDimmendaal/repos",
"events_url": "https://api.github.com/users/RensDimmendaal/events{/privacy}",
"received_events_url": "https://api.github.com/users/RensDimmendaal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=h1) Report\n> Merging [#4489](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/14cb5b35faeda7881341656aacf89d12a8a7e07b&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4489 +/- ##\n==========================================\n- Coverage 78.04% 78.03% -0.01% \n==========================================\n Files 123 123 \n Lines 20477 20477 \n==========================================\n- Hits 15981 15980 -1 \n- Misses 4496 4497 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=footer). Last update [14cb5b3...a5ce320](https://codecov.io/gh/huggingface/transformers/pull/4489?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed ! Thanks for spotting this @RensDimmendaal "
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | I think I found a small bug in the `load_graph_from_args` function in `convert_graph_to_onnx.py`: it accepts a tokenizer as input but doesn't pass it on to the pipeline created inside the function.
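A sketch of what the fix amounts to (surrounding code abbreviated; the point is only the forwarded `tokenizer` keyword):

```python
# inside load_graph_from_args, the tokenizer argument used to be
# accepted but never forwarded, roughly:
#     nlp = pipeline(pipeline_name, model=model, framework=framework)
# the fix passes it through to the pipeline factory:
nlp = pipeline(pipeline_name, model=model, tokenizer=tokenizer, framework=framework)
```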
Love the library 🤗 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4489/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4489",
"html_url": "https://github.com/huggingface/transformers/pull/4489",
"diff_url": "https://github.com/huggingface/transformers/pull/4489.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4489.patch",
"merged_at": 1590006202000
} |
https://api.github.com/repos/huggingface/transformers/issues/4488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4488/comments | https://api.github.com/repos/huggingface/transformers/issues/4488/events | https://github.com/huggingface/transformers/pull/4488 | 621,879,693 | MDExOlB1bGxSZXF1ZXN0NDIwODQ2MzE0 | 4,488 | Make changes to german-bert vocab file more prominent | {
"login": "Timoeller",
"id": 3264870,
"node_id": "MDQ6VXNlcjMyNjQ4NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3264870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Timoeller",
"html_url": "https://github.com/Timoeller",
"followers_url": "https://api.github.com/users/Timoeller/followers",
"following_url": "https://api.github.com/users/Timoeller/following{/other_user}",
"gists_url": "https://api.github.com/users/Timoeller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Timoeller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Timoeller/subscriptions",
"organizations_url": "https://api.github.com/users/Timoeller/orgs",
"repos_url": "https://api.github.com/users/Timoeller/repos",
"events_url": "https://api.github.com/users/Timoeller/events{/privacy}",
"received_events_url": "https://api.github.com/users/Timoeller/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=h1) Report\n> Merging [#4488](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6dc52c78d8f1f96ffd9b8f8178e142b7d4a77d14&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4488 +/- ##\n==========================================\n+ Coverage 78.02% 78.04% +0.01% \n==========================================\n Files 123 123 \n Lines 20477 20477 \n==========================================\n+ Hits 15978 15982 +4 \n+ Misses 4499 4495 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4488/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.76% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4488/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4488/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=footer). Last update [6dc52c7...c4a85ea](https://codecov.io/gh/huggingface/transformers/pull/4488?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks for adding this notice"
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | We have been approached by researchers because the expected behavior of their bert-base-german-cased models changed without any code modifications on their side.
- So we wanted to make the changes to the vocab file more prominent in the model card,
- and also support a solution where people can easily use the old version through https://huggingface.co/deepset/bert-base-german-cased-oldvocab | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4488/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4488",
"html_url": "https://github.com/huggingface/transformers/pull/4488",
"diff_url": "https://github.com/huggingface/transformers/pull/4488.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4488.patch",
"merged_at": 1590005309000
} |
https://api.github.com/repos/huggingface/transformers/issues/4487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4487/comments | https://api.github.com/repos/huggingface/transformers/issues/4487/events | https://github.com/huggingface/transformers/pull/4487 | 621,855,973 | MDExOlB1bGxSZXF1ZXN0NDIwODI2OTAw | 4,487 | Fix slow gpu tests lysandre | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=h1) Report\n> Merging [#4487](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6dc52c78d8f1f96ffd9b8f8178e142b7d4a77d14&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4487 +/- ##\n=======================================\n Coverage 78.02% 78.03% \n=======================================\n Files 123 123 \n Lines 20477 20477 \n=======================================\n+ Hits 15978 15980 +2 \n+ Misses 4499 4497 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4487/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4487/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.76% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=footer). Last update [6dc52c7...2260280](https://codecov.io/gh/huggingface/transformers/pull/4487?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | Fixes three tests of the slow + GPU test suite. cc @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4487/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4487",
"html_url": "https://github.com/huggingface/transformers/pull/4487",
"diff_url": "https://github.com/huggingface/transformers/pull/4487.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4487.patch",
"merged_at": 1589990386000
} |
https://api.github.com/repos/huggingface/transformers/issues/4486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4486/comments | https://api.github.com/repos/huggingface/transformers/issues/4486/events | https://github.com/huggingface/transformers/issues/4486 | 621,808,990 | MDU6SXNzdWU2MjE4MDg5OTA= | 4,486 | tokenizer.vocab has not changed after using add_tokens | {
"login": "suyulan",
"id": 55616659,
"node_id": "MDQ6VXNlcjU1NjE2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/55616659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suyulan",
"html_url": "https://github.com/suyulan",
"followers_url": "https://api.github.com/users/suyulan/followers",
"following_url": "https://api.github.com/users/suyulan/following{/other_user}",
"gists_url": "https://api.github.com/users/suyulan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suyulan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suyulan/subscriptions",
"organizations_url": "https://api.github.com/users/suyulan/orgs",
"repos_url": "https://api.github.com/users/suyulan/repos",
"events_url": "https://api.github.com/users/suyulan/events{/privacy}",
"received_events_url": "https://api.github.com/users/suyulan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"This is the expected behaviour: `len(tokenizer)` shows you the actual size (including the added tokens), whereas `.vocab_size` tells you the original size.\r\n\r\nhttps://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/src/transformers/tokenization_utils.py#L2285-L2290\r\n\r\nPS: don't forget to update your model's embeddings!\r\n\r\n```python\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I find it confusing that the vocab_size doesn't get modified. Also, the Hugging Face documentation describes `tokenizer.add_tokens` as follows: \r\n\r\n> Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from length of the current vocabulary."
] | 1,589 | 1,628 | 1,596 | NONE | null | # ❓ Questions & Help
## Details
When I use `add_tokens`, I run into the following problem:
```python
from transformers import BertTokenizer

# assumption: a BERT-base tokenizer, whose base vocabulary size is 30522
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# len(tokenizer) == 30522
tokens_dict = ['[HL]']
num_added_toks = tokenizer.add_tokens(tokens_dict)
# len(tokenizer) == 30523
# But tokenizer.vocab_size == 30522
```
Should I update the vocabulary myself?
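For anyone landing here: as the maintainer's comment above explains, this is expected behaviour. `vocab_size` reports only the base vocabulary, while `len(tokenizer)` also counts added tokens, so the model's embedding matrix then has to be resized by hand. A minimal sketch, assuming the BERT checkpoint matching the sizes above:

```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))  # grows the embedding matrix to 30523 rows
```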
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4486/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4485/comments | https://api.github.com/repos/huggingface/transformers/issues/4485/events | https://github.com/huggingface/transformers/issues/4485 | 621,731,094 | MDU6SXNzdWU2MjE3MzEwOTQ= | 4,485 | Can't find vocabulary file or is corrupted for MarianTokenizer | {
"login": "sadeqa",
"id": 22193030,
"node_id": "MDQ6VXNlcjIyMTkzMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/22193030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadeqa",
"html_url": "https://github.com/sadeqa",
"followers_url": "https://api.github.com/users/sadeqa/followers",
"following_url": "https://api.github.com/users/sadeqa/following{/other_user}",
"gists_url": "https://api.github.com/users/sadeqa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadeqa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadeqa/subscriptions",
"organizations_url": "https://api.github.com/users/sadeqa/orgs",
"repos_url": "https://api.github.com/users/sadeqa/repos",
"events_url": "https://api.github.com/users/sadeqa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadeqa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing in favour of the better formulated question here: https://github.com/huggingface/transformers/issues/4491",
"Can anyone help with this issue: #5040 ?"
] | 1,589 | 1,592 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using: MarianMT

The problem arises when using the tokenizer with `from_pretrained`.
## To reproduce
```python
from transformers import MarianTokenizer, MarianMTModel
src = 'fr' # source language
trg = 'en' # target language
sample_text = "où est l'arrêt de bus ?"
mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
model = MarianMTModel.from_pretrained(mname)
tok = MarianTokenizer.from_pretrained(mname)
```

Running this produces the following error:
```
stdbuf was not found; communication with perl may hang due to stdio buffering.
FileNotFoundError Traceback (most recent call last)
~\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1054 try:
-> 1055 tokenizer = cls(*init_inputs, **init_kwargs)
1056 except OSError:
~\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_marian.py in __init__(self, vocab, source_spm, target_spm, source_lang, target_lang, unk_token, eos_token, pad_token, max_len)
88
---> 89 self.punc_normalizer = MosesPunctuationNormalizer(source_lang)
90 except ImportError:
~\Anaconda3\envs\pytorch\lib\site-packages\mosestokenizer\punctnormalizer.py in __init__(self, lang)
46 argv = ["perl", program, "-b", "-l", self.lang]
---> 47 super().__init__(argv)
48
~\Anaconda3\envs\pytorch\lib\site-packages\toolwrapper.py in __init__(self, argv, encoding, start, cwd, stdbuf, stderr, env)
63 if start:
---> 64 self.start()
65
~\Anaconda3\envs\pytorch\lib\site-packages\toolwrapper.py in start(self)
107 cwd=self.cwd,
--> 108 env=env,
109 )
~\Anaconda3\envs\pytorch\lib\subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors, text)
799 errread, errwrite,
--> 800 restore_signals, start_new_session)
801 except:
~\Anaconda3\envs\pytorch\lib\subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session)
1206 os.fspath(cwd) if cwd is not None else None,
-> 1207 startupinfo)
1208 finally:
FileNotFoundError: [WinError 2] The system cannot find the file specified
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-10-56dab20251f1> in <module>
6 mname = f'Helsinki-NLP/opus-mt-{src}-{trg}'
7 model = MarianMTModel.from_pretrained(mname)
----> 8 tok = MarianTokenizer.from_pretrained(mname)
~\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)
900
901 """
--> 902 return cls._from_pretrained(*inputs, **kwargs)
903
904 @classmethod
~\Anaconda3\envs\pytorch\lib\site-packages\transformers\tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1056 except OSError:
1057 raise OSError(
-> 1058 "Unable to load vocabulary from file. "
1059 "Please check that the provided vocabulary is accessible and not corrupted."
1060 )
OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.
```
I'm working in a Windows 10 environment.
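A workaround that avoids the Perl-based normalizer entirely (a sketch using the pure-Python `sacremoses` package; it mirrors the patch later posted in the duplicate issue #4491):

```python
# sacremoses does not shell out to perl, so it also works on Windows
from sacremoses import MosesPunctNormalizer

normalizer = MosesPunctNormalizer(lang="fr")
print(normalizer.normalize("où est l'arrêt de bus ?"))
```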
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4485/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4484/comments | https://api.github.com/repos/huggingface/transformers/issues/4484/events | https://github.com/huggingface/transformers/issues/4484 | 621,700,118 | MDU6SXNzdWU2MjE3MDAxMTg= | 4,484 | Bug using Roberta models in QA Transformers pipeline. | {
"login": "thiagomoeng",
"id": 64150563,
"node_id": "MDQ6VXNlcjY0MTUwNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/64150563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thiagomoeng",
"html_url": "https://github.com/thiagomoeng",
"followers_url": "https://api.github.com/users/thiagomoeng/followers",
"following_url": "https://api.github.com/users/thiagomoeng/following{/other_user}",
"gists_url": "https://api.github.com/users/thiagomoeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thiagomoeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thiagomoeng/subscriptions",
"organizations_url": "https://api.github.com/users/thiagomoeng/orgs",
"repos_url": "https://api.github.com/users/thiagomoeng/repos",
"events_url": "https://api.github.com/users/thiagomoeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/thiagomoeng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, could you please post a code sample and a textual error, rather than an image? Thanks.",
"@LysandreJik yes, its very simple my code I just trying to run a transformers example.\r\n\r\n if __name__ == '__main__':\r\n import ipywidgets as widgets\r\n from transformers.pipelines import pipeline\r\n from transformers.modeling_auto import AutoModelForQuestionAnswering\r\n from transformers.tokenization_auto import AutoTokenizer\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(\"deepset/roberta-base-squad2\")\r\n model = AutoModelForQuestionAnswering.from_pretrained(\"deepset/roberta-base-squad2\")\r\n nlp_qa = pipeline('question-answering', model=model, tokenizer=tokenizer, device = 0)\r\n X = nlp_qa(context=\"text document.txt\", question='What is this project?')\r\n print(X)\r\n\r\nAnd runing with this albert or any another albert I got this error:\r\n\r\n File \"c:/Users/tioga/Desktop/Tranformers/transformers_test.py\", line 44, in <module>\r\n X = nlp_qa(context=st, question='What is this project?')\r\n File \"C:\\Python\\lib\\site-packages\\transformers\\pipelines.py\", line 1042, in __call__\r\n for s, e, score in zip(starts, ends, scores)\r\n File \"C:\\Python\\lib\\site-packages\\transformers\\pipelines.py\", line 1042, in <listcomp>\r\n for s, e, score in zip(starts, ends, scores)\r\n KeyError: 0\r\n",
"I can reproduce this error, but it is working with other models for me. Pinging @tholor who might know what is going on.",
"hi guys, anyone managed to understand what the above issue is? I am facing the same issue.\r\nThanks.",
"I believe this was fixed in #4049, which is available in the latest release `v2.10.0`. What are your installed `transformers` versions?",
"@LysandreJik I was using 2.7.0, but I still get the same error using 2.10.0",
"Using the exact code sample mentioned above? Are you using different code?",
"I have the exact issue with one of my Roberta models.. But I tried exact code now\r\n<img width=\"827\" alt=\"Screen Shot 2020-05-26 at 6 38 24 AM\" src=\"https://user-images.githubusercontent.com/3698879/82908070-50b28980-9f1c-11ea-8a12-ff70f862c46b.png\">\r\n",
"It's hard for me to test if you give an image. Can you paste the code? If you already have `transformers==2.7.0` installed, your `!pip install transformers==2.10.0` won't work. You need to add the `--upgrade` or `-U` flag.\r\n\r\nCan you add\r\n\r\n```py\r\nfrom transformers import __version__\r\nprint(__version__)\r\n```\r\n\r\njust to make sure?",
"@LysandreJik works for me. Thank you.",
"Glad I could help!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi,\r\n\r\nWhen I use transformers 2.4.0, it is working without the error, but with 3.0.2 I will get the same error!\r\n\r\nSo when the context has the answer in it, everything is fine, when it has not, I get the same error.\r\nExample:\r\n\r\n```\r\nfrom transformers.pipelines import pipeline\r\nname=\"ktrapeznikov/albert-xlarge-v2-squad-v2\"\r\nnlp=pipeline('question-answering',model=name,tokenizer=name,device=-1)\r\n```\r\n\r\nThis example won't cause any errors and I get the right answer:\r\n\r\n```\r\nqa_input = {'question': 'Is the company listed on any stock exchange?', 'context': 'Roche Corporate Executive Committee on 31 December 2019. We are dedicated to long-term success. Roche is listed on New York stock exchange.'}\r\nqa_response = nlp(qa_input)\r\n```\r\n\r\nThis will cause the error:\r\n\r\n```\r\nqa_input = {'question': 'Is the company listed on any stock exchange?', 'context': 'Roche Corporate Executive Committee on 31 December 2019. We are dedicated to long-term success.'}\r\nqa_response = nlp(qa_input)\r\n```\r\n\r\nCan you verify that it is not working with 3.0.2 ?\r\nDo you have any solutions or I should just use older versions for now to work with?\r\n\r\nThanks!\r\n"
] | 1,589 | 1,618 | 1,596 | NONE | null | # 🐛 Bug
Hello, I can't use any RoBERTa model with pipeline('question-answering'). Can someone help me figure out how to fix this issue?
Note: this error appears only when I use RoBERTa models.
ERROR:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4484/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4483/comments | https://api.github.com/repos/huggingface/transformers/issues/4483/events | https://github.com/huggingface/transformers/issues/4483 | 621,683,208 | MDU6SXNzdWU2MjE2ODMyMDg= | 4,483 | Trying to add support for GPT2 as decoder in EncoderDecoder model | {
"login": "dimi1357",
"id": 22443447,
"node_id": "MDQ6VXNlcjIyNDQzNDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/22443447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dimi1357",
"html_url": "https://github.com/dimi1357",
"followers_url": "https://api.github.com/users/dimi1357/followers",
"following_url": "https://api.github.com/users/dimi1357/following{/other_user}",
"gists_url": "https://api.github.com/users/dimi1357/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dimi1357/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dimi1357/subscriptions",
"organizations_url": "https://api.github.com/users/dimi1357/orgs",
"repos_url": "https://api.github.com/users/dimi1357/repos",
"events_url": "https://api.github.com/users/dimi1357/events{/privacy}",
"received_events_url": "https://api.github.com/users/dimi1357/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843738573,
"node_id": "MDU6TGFiZWwxODQzNzM4NTcz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Encoder-Decoder",
"name": "Core: Encoder-Decoder",
"color": "ef536d",
"default": false,
"description": ""
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@dimi1357 out of curiosity, what does training this look like?",
"> @dimi1357 out of curiosity, what does training this look like?\r\n\r\nThis is my training loop:\r\n```python\r\nx, encoder_attention_mask, y, decoder_attention_mask, _ = batch\r\nx = x.to(self.device)\r\ny = y.to(self.device)\r\nencoder_attention_mask = encoder_attention_mask.to(self.device)\r\ndecoder_attention_mask = decoder_attention_mask.to(self.device)\r\nmodel_kwargs = {\r\n \"attention_mask\": encoder_attention_mask,\r\n \"decoder_attention_mask\": decoder_attention_mask,\r\n \"lm_labels\": y\r\n}\r\nself.optimizer.zero_grad()\r\noutputs = self.model(input_ids=x, decoder_input_ids=y, **model_kwargs)\r\nloss = outputs[0]\r\nloss.backward()\r\nself.optimizer.step()\r\nif self.scheduler is not None:\r\n self.scheduler.step()\r\n```\r\n\r\nand I create the model this way:\r\n```pyhon\r\nconfig_decoder = AutoConfig.from_pretrained(decoder_model_name, is_decoder=True)\r\nconfig_encoder = AutoConfig.from_pretrained(encoder_model_name, is_decoder=False)\r\nconfig = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)\r\nres_model = EncoderDecoderModel(config=config)\r\n```",
"@dimi1357 Did you finally make it work? Can you provide me the \"full changes\" in some way? I am also interested in using the GPT2 model as decoder.",
"Thanks for the Feature request and the in-detail code! I will think a bit more about how to implement this and get back to you!",
"> Thanks for the Feature request and the in-detail code! I will think a bit more about how to implement this and get back to you!\r\n\r\nI forgot to add the change I've made to `Block` class forward function (I've also edited the issue):\r\n```python\r\n def forward(self, x, layer_past=None, attention_mask=None, head_mask=None, use_cache=False, encoder_hidden_states=None,\r\n encoder_attention_mask=None):\r\n output_attn = self.attn(\r\n self.ln_1(x),\r\n layer_past=layer_past,\r\n attention_mask=attention_mask,\r\n head_mask=head_mask,\r\n use_cache=use_cache,\r\n )\r\n a = output_attn[0] # output_attn: a, present, (attentions)\r\n outputs = []\r\n if self.is_decoder and encoder_hidden_states is not None:\r\n cross_attention_outputs = self.crossattention(\r\n a, layer_past, attention_mask, head_mask, encoder_hidden_states=encoder_hidden_states,\r\n encoder_attention_mask=encoder_attention_mask\r\n )\r\n a = cross_attention_outputs[0]\r\n outputs = outputs + cross_attention_outputs[1:] # add cross attentions if we output attention weights\r\n\r\n x = x + a\r\n m = self.mlp(self.ln_2(x))\r\n x = x + m\r\n\r\n outputs = [x] + output_attn[1:] + outputs\r\n\r\n return outputs # x, present, (attentions)\r\n```",
"> @dimi1357 Did you finally make it work? Can you provide me the \"full changes\" in some way? I am also interested in using the GPT2 model as decoder.\r\n\r\nYou can add the code above to where you've installed the transformers package, but I'm still not sure that this implementation is correct, so I suggest you wait for an update from huggingface team if this is okay.",
"Hey @dimi1357 . So I think the Encoder Decoder roadmap is as follows: \r\n- In ~2 weeks, we will open-source a clean notebook showing how a `Bert2Bert` model can be fine-tuned\r\n- After that, we will take a deeper look into hooking `GPT2` into the `EncoderDecoder` framework. \r\n\r\nI will keep your code sample here in mind for this :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> Hey @dimi1357 . So I think the Encoder Decoder roadmap is as follows:\r\n> \r\n> * In ~2 weeks, we will open-source a clean notebook showing how a `Bert2Bert` model can be fine-tuned\r\n> * After that, we will take a deeper look into hooking `GPT2` into the `EncoderDecoder` framework.\r\n> \r\n> I will keep your code sample here in mind for this :-)\r\n\r\nHi, \r\nIs there any updates regarding to BERT2GPT implementation.\r\nThanks!",
"Hey, I will take a look at BERTGPT2 encoder-decoder probably on Monday next week",
"@patrickvonplaten Can you please share a work in progress notebook/colab, or some code. I am willing to help with tests and datasets, in order to improve the BERT2GPT2 model. Thank you :D",
"Will finish the PR tomorrow then it should be pretty easy to do BERT2GPT2.",
"Hi @patrickvonplaten . I've used your latest commit to train BERT2GPT2 using your BERT2BERT training tutorial. It was straight forward, I only had to replace the \"bert\" from decoder with \"gpt2\". The training worked, but at inference time there was a code error in `prepare_inputs_for_generation` at line 299:\r\n> /transformers/modeling_encoder_decoder.py\r\n> 297 # first step\r\n> 298 if type(past) is tuple:\r\n> 299 encoder_outputs, _ = past <----\r\n> 300 else:\r\n> 301 encoder_outputs = (past,)\r\n> \r\n\r\n> \r\n\r\n> ValueError: too many values to unpack (expected 2)\r\n\r\nI do not know if the model requires a different evaluation approach. ",
"> Will finish the PR tomorrow then it should be pretty easy to do BERT2GPT2.\r\n\r\nThanks for the implementation, I'm going to test it now.",
"GPT2 is added and results on summariation look promising. Check out this model (Bert2GPT2 trained on CNN/Daily Mail) including train and eval script: https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16 .",
"Hi @patrickvonplaten, I used this model card to train on my custom dataset, but again the TypeError is been thrownback that `forward() got an unexpected keyword argument 'encoder_hidden_states'`\r\nhere is my code\r\n```\r\nimport nlp\r\nimport logging\r\nfrom transformers import BertTokenizer, GPT2Tokenizer, EncoderDecoderModel, Trainer, TrainingArguments\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\n\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-uncased\", \"gpt2\")\r\n# cache is currently not supported by EncoderDecoder framework\r\nmodel.decoder.config.use_cache = False\r\nbert_tokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\n# CLS token will work as BOS token\r\nbert_tokenizer.bos_token = bert_tokenizer.cls_token\r\n\r\n# SEP token will work as EOS token\r\nbert_tokenizer.eos_token = bert_tokenizer.sep_token\r\n\r\n\r\n# make sure GPT2 appends EOS in begin and end\r\ndef build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):\r\n outputs = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]\r\n return outputs\r\n\r\n\r\nGPT2Tokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens\r\ngpt2_tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n# set pad_token_id to unk_token_id -> be careful here as unk_token_id == eos_token_id == bos_token_id\r\ngpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token\r\n\r\n\r\n# set decoding params\r\nmodel.config.decoder_start_token_id = gpt2_tokenizer.bos_token_id\r\nmodel.config.eos_token_id = gpt2_tokenizer.eos_token_id\r\nmodel.config.max_length = 142\r\nmodel.config.min_length = 56\r\nmodel.config.no_repeat_ngram_size = 3\r\nmodel.early_stopping = True\r\nmodel.length_penalty = 2.0\r\nmodel.num_beams = 4\r\n\r\n# load train and validation data\r\ntrain_dataset = nlp.load_dataset('csv', data_files='data.csv',split='train[:80%]')\r\nval_dataset = nlp.load_dataset('csv', data_files='data.csv',split='train[80%:]')\r\n\r\n# load rouge for validation\r\nrouge = nlp.load_metric(\"rouge\", experiment_id=1)\r\n\r\nencoder_length = 512\r\ndecoder_length = 128\r\nbatch_size = 16\r\n\r\n\r\n# map data correctly\r\ndef map_to_encoder_decoder_inputs(batch): # Tokenizer will automatically set [BOS] <text> [EOS] \r\n # use bert tokenizer here for encoder\r\n inputs = bert_tokenizer.encode_plus(batch[\"Patient\"], padding=\"max_length\", truncation=True, max_length=encoder_length)\r\n # force summarization <= 128\r\n outputs = gpt2_tokenizer.encode_plus(batch[\"Doctor\"], padding=\"max_length\", truncation=True, max_length=decoder_length)\r\n\r\n batch[\"input_ids\"] = inputs.input_ids\r\n batch[\"attention_mask\"] = inputs.attention_mask\r\n batch[\"decoder_input_ids\"] = outputs.input_ids\r\n batch[\"labels\"] = outputs.input_ids.copy()\r\n batch[\"decoder_attention_mask\"] = outputs.attention_mask\r\n\r\n # complicated list comprehension here because pad_token_id alone is not good enough to know whether label should be excluded or not\r\n batch[\"labels\"] = [\r\n [-100 if mask == 0 else token for mask, token in mask_and_tokens] for mask_and_tokens in [zip(masks, labels) for masks, labels in zip(batch[\"decoder_attention_mask\"], batch[\"labels\"])]\r\n ]\r\n\r\n assert all([len(x) == encoder_length for x in inputs.input_ids])\r\n assert all([len(x) == decoder_length for x in outputs.input_ids])\r\n\r\n return batch\r\n\r\n\r\ndef compute_metrics(pred):\r\n labels_ids = pred.label_ids\r\n pred_ids = pred.predictions\r\n\r\n # all unnecessary tokens are removed\r\n 
pred_str = gpt2_tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\r\n labels_ids[labels_ids == -100] = gpt2_tokenizer.eos_token_id\r\n label_str = gpt2_tokenizer.batch_decode(labels_ids, skip_special_tokens=True)\r\n\r\n rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=[\"rouge2\"])[\"rouge2\"].mid\r\n\r\n return {\r\n \"rouge2_precision\": round(rouge_output.precision, 4),\r\n \"rouge2_recall\": round(rouge_output.recall, 4),\r\n \"rouge2_fmeasure\": round(rouge_output.fmeasure, 4),\r\n }\r\n\r\n\r\n# make train dataset ready\r\ntrain_dataset = train_dataset.map(\r\n map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=[\"Patient\", \"Doctor\"],\r\n)\r\ntrain_dataset.set_format(\r\n type=\"torch\", columns=[\"input_ids\", \"attention_mask\", \"decoder_input_ids\", \"decoder_attention_mask\", \"labels\"],\r\n)\r\n\r\n# same for validation dataset\r\nval_dataset = val_dataset.map(\r\n map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=[\"Patient\", \"Doctor\"],\r\n)\r\nval_dataset.set_format(\r\n type=\"torch\", columns=[\"input_ids\", \"attention_mask\", \"decoder_input_ids\", \"decoder_attention_mask\", \"labels\"],\r\n)\r\n\r\n# set training arguments - these params are not really tuned, feel free to change\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./ambi\",\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n evaluate_during_training=True,\r\n do_train=True,\r\n do_eval=True,\r\n logging_steps=1000,\r\n save_steps=1000,\r\n eval_steps=1000,\r\n overwrite_output_dir=True,\r\n warmup_steps=2000,\r\n save_total_limit=10,\r\n fp16=True,\r\n)\r\n\r\n# instantiate trainer\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n)\r\n\r\n# start training\r\ntrainer.train()\r\n```\r\nIf you can see it carefully you can find that an argument is missing in `TrainingArguments` module, I always get an error that why `predict_from_generate` is passed, I tried finding that attribute in [`training_args.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py), but it seems there is no such attribute available in it. Please clarify which version are you using, If it is above 2.11 then please clarify why my the above code is throwing this error.",
"You need to switch to this branch: https://github.com/huggingface/transformers/tree/more_general_trainer_metric to make the training work. I am trying to integrate this branch into master soon :-) ",
"Thanks for letting me know.",
"Sorry to ask a question after a long period of time :-). I am still not very clear about the effect of **encoder attention mask** in GPT2. \r\n\r\nI understand that it is used only in the decoder of Encoder-Decoder model to make some change to the cross attention weights. Also, I notice the operation defined in the modelling_gpt2.py:\r\n`attention_mask = encoder_attention_mask`\r\n`...`\r\n`w=w+attention_mask`\r\n\r\nHowever, I am confused why we need this **encoder attention mask**. Is that also because the decoder can not see the whole sequence?\r\n\r\nThanks for help :-)\r\n",
"> Hi @patrickvonplaten, I used this model card to train on my custom dataset, but again the TypeError is been thrownback that `forward() got an unexpected keyword argument 'encoder_hidden_states'` here is my code\r\n> \r\n> ```\r\n> import nlp\r\n> import logging\r\n> from transformers import BertTokenizer, GPT2Tokenizer, EncoderDecoderModel, Trainer, TrainingArguments\r\n> \r\n> logging.basicConfig(level=logging.INFO)\r\n> \r\n> model = EncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-uncased\", \"gpt2\")\r\n> # cache is currently not supported by EncoderDecoder framework\r\n> model.decoder.config.use_cache = False\r\n> bert_tokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n> \r\n> # CLS token will work as BOS token\r\n> bert_tokenizer.bos_token = bert_tokenizer.cls_token\r\n> \r\n> # SEP token will work as EOS token\r\n> bert_tokenizer.eos_token = bert_tokenizer.sep_token\r\n> \r\n> \r\n> # make sure GPT2 appends EOS in begin and end\r\n> def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):\r\n> outputs = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]\r\n> return outputs\r\n> \r\n> \r\n> GPT2Tokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens\r\n> gpt2_tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n> # set pad_token_id to unk_token_id -> be careful here as unk_token_id == eos_token_id == bos_token_id\r\n> gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token\r\n> \r\n> \r\n> # set decoding params\r\n> model.config.decoder_start_token_id = gpt2_tokenizer.bos_token_id\r\n> model.config.eos_token_id = gpt2_tokenizer.eos_token_id\r\n> model.config.max_length = 142\r\n> model.config.min_length = 56\r\n> model.config.no_repeat_ngram_size = 3\r\n> model.early_stopping = True\r\n> model.length_penalty = 2.0\r\n> model.num_beams = 4\r\n> \r\n> # load train and validation data\r\n> train_dataset = nlp.load_dataset('csv', data_files='data.csv',split='train[:80%]')\r\n> val_dataset = nlp.load_dataset('csv', data_files='data.csv',split='train[80%:]')\r\n> \r\n> # load rouge for validation\r\n> rouge = nlp.load_metric(\"rouge\", experiment_id=1)\r\n> \r\n> encoder_length = 512\r\n> decoder_length = 128\r\n> batch_size = 16\r\n> \r\n> \r\n> # map data correctly\r\n> def map_to_encoder_decoder_inputs(batch): # Tokenizer will automatically set [BOS] <text> [EOS] \r\n> # use bert tokenizer here for encoder\r\n> inputs = bert_tokenizer.encode_plus(batch[\"Patient\"], padding=\"max_length\", truncation=True, max_length=encoder_length)\r\n> # force summarization <= 128\r\n> outputs = gpt2_tokenizer.encode_plus(batch[\"Doctor\"], padding=\"max_length\", truncation=True, max_length=decoder_length)\r\n> \r\n> batch[\"input_ids\"] = inputs.input_ids\r\n> batch[\"attention_mask\"] = inputs.attention_mask\r\n> batch[\"decoder_input_ids\"] = outputs.input_ids\r\n> batch[\"labels\"] = outputs.input_ids.copy()\r\n> batch[\"decoder_attention_mask\"] = outputs.attention_mask\r\n> \r\n> # complicated list comprehension here because pad_token_id alone is not good enough to know whether label should be excluded or not\r\n> batch[\"labels\"] = [\r\n> [-100 if mask == 0 else token for mask, token in mask_and_tokens] for mask_and_tokens in [zip(masks, labels) for masks, labels in zip(batch[\"decoder_attention_mask\"], batch[\"labels\"])]\r\n> ]\r\n> \r\n> assert all([len(x) == encoder_length for x in inputs.input_ids])\r\n> assert all([len(x) == decoder_length for x in outputs.input_ids])\r\n> \r\n> return batch\r\n> \r\n> \r\n> 
def compute_metrics(pred):\r\n> labels_ids = pred.label_ids\r\n> pred_ids = pred.predictions\r\n> \r\n> # all unnecessary tokens are removed\r\n> pred_str = gpt2_tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\r\n> labels_ids[labels_ids == -100] = gpt2_tokenizer.eos_token_id\r\n> label_str = gpt2_tokenizer.batch_decode(labels_ids, skip_special_tokens=True)\r\n> \r\n> rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=[\"rouge2\"])[\"rouge2\"].mid\r\n> \r\n> return {\r\n> \"rouge2_precision\": round(rouge_output.precision, 4),\r\n> \"rouge2_recall\": round(rouge_output.recall, 4),\r\n> \"rouge2_fmeasure\": round(rouge_output.fmeasure, 4),\r\n> }\r\n> \r\n> \r\n> # make train dataset ready\r\n> train_dataset = train_dataset.map(\r\n> map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=[\"Patient\", \"Doctor\"],\r\n> )\r\n> train_dataset.set_format(\r\n> type=\"torch\", columns=[\"input_ids\", \"attention_mask\", \"decoder_input_ids\", \"decoder_attention_mask\", \"labels\"],\r\n> )\r\n> \r\n> # same for validation dataset\r\n> val_dataset = val_dataset.map(\r\n> map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=[\"Patient\", \"Doctor\"],\r\n> )\r\n> val_dataset.set_format(\r\n> type=\"torch\", columns=[\"input_ids\", \"attention_mask\", \"decoder_input_ids\", \"decoder_attention_mask\", \"labels\"],\r\n> )\r\n> \r\n> # set training arguments - these params are not really tuned, feel free to change\r\n> training_args = TrainingArguments(\r\n> output_dir=\"./ambi\",\r\n> per_device_train_batch_size=batch_size,\r\n> per_device_eval_batch_size=batch_size,\r\n> evaluate_during_training=True,\r\n> do_train=True,\r\n> do_eval=True,\r\n> logging_steps=1000,\r\n> save_steps=1000,\r\n> eval_steps=1000,\r\n> overwrite_output_dir=True,\r\n> warmup_steps=2000,\r\n> save_total_limit=10,\r\n> fp16=True,\r\n> )\r\n> \r\n> # instantiate trainer\r\n> trainer = Trainer(\r\n> model=model,\r\n> args=training_args,\r\n> compute_metrics=compute_metrics,\r\n> train_dataset=train_dataset,\r\n> eval_dataset=val_dataset,\r\n> )\r\n> \r\n> # start training\r\n> trainer.train()\r\n> ```\r\n> \r\n> If you can see it carefully you can find that an argument is missing in `TrainingArguments` module, I always get an error that why `predict_from_generate` is passed, I tried finding that attribute in [`training_args.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py), but it seems there is no such attribute available in it. Please clarify which version are you using, If it is above 2.11 then please clarify why my the above code is throwing this error.\r\n\r\n@AmbiTyga @patrickvonplaten Is this error fixed? I have switched to the branch \"more_general_trainer_metric.\" But it seems this error still exists when I am running codes in https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16. ",
"The code is a bit outdated there. You should be able to simply use the https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization example. In order to create a BERT2GPT2 checkpoint, you could a code that is similar to this one: https://huggingface.co/docs/transformers/v4.17.0/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel.forward\r\n\r\n(just replace one BERT by GPT2)\r\n\r\nSo to summarize,\r\n\r\n1. Create a warm-started bert-gpt2 checkpoint\r\n2. save checkpoint\r\n3. use summarization example to fine-tune the checkpoint\r\n\r\nI'll keep this issue open for now since we should probably create a nice \"How-to\" guide for this",
"> The code is a bit outdated there. You should be able to simply use the https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization example. In order to create a BERT2GPT2 checkpoint, you could a code that is similar to this one: https://huggingface.co/docs/transformers/v4.17.0/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel.forward\r\n> \r\n> (just replace one BERT by GPT2)\r\n> \r\n> So to summarize,\r\n> \r\n> 1. Create a warm-started bert-gpt2 checkpoint\r\n> 2. save checkpoint\r\n> 3. use summarization example to fine-tune the checkpoint\r\n> \r\n> I'll keep this issue open for now since we should probably create a nice \"How-to\" guide for this\r\n\r\nThanks for your guidance! I try this method to create and ft a bert2gpt2 model, but it seems that \"tokenizer\" would be a problem: I can't load a single suitable tokenizer for this model in the summarization example. So is it necessary for me to defined tokenizer1 for bert and tokenizer2 for gpt2 and then change any code that is related to \"tokenizer\" in order to fix this problem? @patrickvonplaten ",
"It's fine to load two tokenizers no? ",
"> \r\n\r\nYeah,I use 2 tokenizers to replace \"tokenizer\" in run_summarization.py and also do some other changes, the code can work now(although I don't know whether it is right....). Here are my changes.\r\n\r\n1. change the resize_token_embeddings method`#model.resize_token_embeddings(len(tokenizer))`\r\n `model.encoder.resize_token_embeddings(len(tokenizer1))`\r\n `model.decoder.resize_token_embeddings(len(tokenizer2))`\r\n2. some special tokens settings according to [https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16](url)\r\n3. facing problem like https://github.com/huggingface/transformers/issues/10646#issue-829065330, and used codes in [https://github.com/huggingface/transformers/blob/24e2fa1590faac894da3422daf56abf9770c9d81/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L555](url) line554-555 and line147-162\r\n4. Noticing that in bert base/large \"max_position_embeddings\" is 512, and default max_source_length in run_summarization.py is 1024, as a result if our input sequence length is over 512, we will get an error like https://github.com/huggingface/transformers/issues/15081#issue-1097193504. So let max_source_length=512.\r\n5. all codes segmentations of (tokenizer->tokenizer2) in run_summarization.py(**Not sure**)\r\n```\r\n # Setup the tokenizer for targets\r\n with tokenizer2.as_target_tokenizer():\r\n labels = tokenizer2(targets, max_length=max_target_length, padding=padding, truncation=True)\r\n\r\n # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore\r\n # padding in the loss.\r\n if padding == \"max_length\" and data_args.ignore_pad_token_for_loss:\r\n labels[\"input_ids\"] = [\r\n [(l if l != tokenizer2.pad_token_id else -100) for l in label] for label in labels[\"input_ids\"]\r\n ]\r\n\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n return model_inputs\r\n```\r\n\r\n\r\n```\r\n def compute_metrics(eval_preds):\r\n preds, labels = eval_preds\r\n if isinstance(preds, tuple):\r\n preds = preds[0]\r\n decoded_preds = tokenizer2.batch_decode(preds, skip_special_tokens=True)\r\n if data_args.ignore_pad_token_for_loss:\r\n # Replace -100 in the labels as we can't decode them.\r\n labels = np.where(labels != -100, labels, tokenizer.pad_token_id)\r\n decoded_labels = tokenizer2.batch_decode(labels, skip_special_tokens=True)\r\n\r\n```\r\n\r\n\r\n\r\n```\r\n if trainer.is_world_process_zero():\r\n if training_args.predict_with_generate:\r\n predictions = tokenizer2.batch_decode(\r\n predict_results.predictions, skip_special_tokens=True, clean_up_tokenization_spaces=True\r\n )\r\n predictions = [pred.strip() for pred in predictions]\r\n output_prediction_file = os.path.join(training_args.output_dir, \"generated_predictions.txt\")\r\n with open(output_prediction_file, \"w\") as writer:\r\n writer.write(\"\\n\".join(predictions))\r\n\r\n```\r\n> It's fine to load two tokenizers no?\r\n\r\n",
"Hey everyone,\r\nDid this work go anywhere?\r\nI need a pre-trained gpt2 model based on nn.Linear instead of Conv1D layers for research purpose, Is the implementation above merged anywhere, or there exist some other gpt2 model based on nn.Linear?",
"Can I work on this issue as a good first issue or is there no point?",
"I don't think there is any point @Forpee ",
"> For a generation problem, it is usually better to use GPT2 as the decoder, over BERT.\r\n\r\nWhy should this be the case, if you have enough data to train the new cross-attention parameters?\r\n\r\nThe paper for the encoderDecoderModel reports for the summarization task: \r\n\r\n"
] | 1,589 | 1,707 | null | NONE | null | # 🚀 Feature request
Hi,
I am trying to add the option of using GPT2 as the decoder in the EncoderDecoder model, which currently only supports BERT-style models as the decoder.
## Motivation
For a generation problem, it is usually better to use GPT2 as the decoder, over BERT.
## Your contribution
I've made the following changes in the `modeling_gpt2.py` file (a usage sketch follows the list below):
- Added crossattention layer if the model is a decoder, to the `Block` class:
```python
class Block(nn.Module):
def __init__(self, n_ctx, config, scale=False):
super().__init__()
nx = config.n_embd
self.ln_1 = nn.LayerNorm(nx, eps=config.layer_norm_epsilon)
self.attn = Attention(nx, n_ctx, config, scale)
self.ln_2 = nn.LayerNorm(nx, eps=config.layer_norm_epsilon)
self.mlp = MLP(4 * nx, config)
self.is_decoder = config.is_decoder
if self.is_decoder:
self.crossattention = Attention(nx, n_ctx, config, scale)
...
def forward(self, x, layer_past=None, attention_mask=None, head_mask=None, use_cache=False, encoder_hidden_states=None,
encoder_attention_mask=None):
output_attn = self.attn(
self.ln_1(x),
layer_past=layer_past,
attention_mask=attention_mask,
head_mask=head_mask,
use_cache=use_cache,
)
a = output_attn[0] # output_attn: a, present, (attentions)
outputs = []
if self.is_decoder and encoder_hidden_states is not None:
cross_attention_outputs = self.crossattention(
a, layer_past, attention_mask, head_mask, encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask
)
a = cross_attention_outputs[0]
outputs = outputs + cross_attention_outputs[1:] # add cross attentions if we output attention weights
x = x + a
m = self.mlp(self.ln_2(x))
x = x + m
outputs = [x] + output_attn[1:] + outputs
return outputs # x, present, (attentions)
```
- Added 3 Linear layers instead of the Conv1d layer:
```python
class Attention(nn.Module):
def __init__(self, nx, n_ctx, config, scale=False):
...
# self.c_attn = Conv1D(n_state * 3, nx)
self.query = nn.Linear(n_state, nx)
self.key = nn.Linear(n_state, nx)
self.value = nn.Linear(n_state, nx)
...
```
- Added `encoder_attention_mask` and `encoder_hidden_states` to the forward function of the `Attention` class, and used them for the key and the value when they are provided:
```python
def forward(self, x, layer_past=None, attention_mask=None, head_mask=None, use_cache=False, encoder_hidden_states=None,
encoder_attention_mask=None):
query = self.query(x)
if encoder_hidden_states is not None:
key = self.key(encoder_hidden_states)
value = self.value(encoder_hidden_states)
attention_mask = encoder_attention_mask
else:
key = self.key(x)
value = self.value(x)
query = self.split_heads(query)
key = self.split_heads(key, k=True)
value = self.split_heads(value)
...
```
- Added the `encoder_attention_mask` and `encoder_hidden_states` arguments to the `GPT2Model` forward function, and processed `encoder_attention_mask` the same way as `attention_mask`:
```python
class GPT2Model(GPT2PreTrainedModel):
...
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
use_cache=True,
encoder_hidden_states=None,
encoder_attention_mask=None,
):
...
# Encoder attention mask. (same action as for regular attention mask)
if encoder_attention_mask is not None:
assert batch_size > 0, "batch_size has to be defined and > 0"
encoder_attention_mask = encoder_attention_mask.view(batch_size, -1)
encoder_attention_mask = encoder_attention_mask.unsqueeze(1).unsqueeze(2)
encoder_attention_mask = encoder_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
encoder_attention_mask = (1.0 - encoder_attention_mask) * -10000.0
...
for i, (block, layer_past) in enumerate(zip(self.h, past)):
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states.view(*output_shape),)
outputs = block(
hidden_states,
layer_past=layer_past,
attention_mask=attention_mask,
head_mask=head_mask[i],
use_cache=use_cache,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
)
...
```
- Added the `encoder_attention_mask` and `encoder_hidden_states` arguments to the `GPT2LMHeadModel` forward function, as well as `lm_labels` and `masked_lm_labels` for EncoderDecoder model compatibility (probably it's better to use `GPT2DoubleHeadsModel`):
```python
class GPT2LMHeadModel(GPT2PreTrainedModel):
...
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=True,
lm_labels=None,
masked_lm_labels=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
):
...
if lm_labels is not None:
if labels is not None:
raise ValueError("You cannot specify both labels and lm_labels at the same time")
labels = lm_labels
transformer_outputs = self.transformer(
input_ids,
past=past,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
)
...
```
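To make the intended usage concrete, here is a minimal sketch (not part of the patch above) of how a BERT-encoder/GPT2-decoder model would be instantiated once these changes are in place. It mirrors the snippet in the comments below; the checkpoint names are illustrative assumptions:

```python
from transformers import AutoConfig, EncoderDecoderConfig, EncoderDecoderModel

# Illustrative checkpoint names; any BERT-style encoder plus a GPT2 decoder should work.
config_encoder = AutoConfig.from_pretrained("bert-base-uncased", is_decoder=False)
config_decoder = AutoConfig.from_pretrained("gpt2", is_decoder=True)

# Combine the two configs and build the encoder-decoder model from them.
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = EncoderDecoderModel(config=config)
```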
My biggest concern is with the second bullet, and I wanted to ask you whether this implementation seems right (for now, it looks like I am able to train and test an EncoderDecoder with a BERT2GPT architecture).
If needed, I can of course provide the full code for all of my changes, but they are all listed above.
Most (if not all) of the code I've added is adapted from the huggingface `modeling_bert.py` file, so all of the credit goes to them.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4483/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4482/comments | https://api.github.com/repos/huggingface/transformers/issues/4482/events | https://github.com/huggingface/transformers/pull/4482 | 621,677,164 | MDExOlB1bGxSZXF1ZXN0NDIwNjgxODYx | 4,482 | Create model card for RuPERTA-base-finetuned-pos | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=h1) Report\n> Merging [#4482](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efbc1c5a9d96048ab11f8d746fe51107cb91646f&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4482 +/- ##\n=======================================\n Coverage 78.03% 78.03% \n=======================================\n Files 123 123 \n Lines 20477 20477 \n=======================================\n Hits 15980 15980 \n Misses 4497 4497 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4482/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4482/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=footer). Last update [efbc1c5...82aef0f](https://codecov.io/gh/huggingface/transformers/pull/4482?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4482/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4482",
"html_url": "https://github.com/huggingface/transformers/pull/4482",
"diff_url": "https://github.com/huggingface/transformers/pull/4482.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4482.patch",
"merged_at": 1589982350000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4481/comments | https://api.github.com/repos/huggingface/transformers/issues/4481/events | https://github.com/huggingface/transformers/pull/4481 | 621,566,101 | MDExOlB1bGxSZXF1ZXN0NDIwNTkyNDAy | 4,481 | Add mecab dependency on slow tests. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=h1) Report\n> Merging [#4481](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/384f0eb2f9d42e44094dbfd0917ccf4e6ddb462a&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4481 +/- ##\n==========================================\n- Coverage 77.96% 77.88% -0.09% \n==========================================\n Files 120 120 \n Lines 20140 20140 \n==========================================\n- Hits 15703 15686 -17 \n- Misses 4437 4454 +17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4481/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=footer). Last update [384f0eb...4deb915](https://codecov.io/gh/huggingface/transformers/pull/4481?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I think 865d4d595eefc8cc9cee58fec9179bd182be0e2e might be a more \"correct\" way to fix this"
] | 1,589 | 1,590 | 1,590 | MEMBER | null | Solves the following error:
```
2020-05-19T17:01:17.3352437Z [gw0] linux -- Python 3.7.6 /home/hf/actions-r
2020-05-19T17:01:17.3354221Z @slow
2020-05-19T17:01:17.3354825Z def test_sequence_builders(self):
2020-05-19T17:01:17.3356512Z > tokenizer = self.tokenizer_class.from_pretrained("bert-base-japanese-char")
2020-05-19T17:01:17.3356685Z
2020-05-19T17:01:17.3357374Z tests/test_tokenization_bert_japanese.py:192:
2020-05-19T17:01:17.3359012Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2020-05-19T17:01:17.3360868Z .env/lib/python3.7/site-packages/transformers/tokenization_utils.py:902: in from_pretrained
2020-05-19T17:01:17.3361266Z return cls._from_pretrained(*inputs, **kwargs)
2020-05-19T17:01:17.3363161Z .env/lib/python3.7/site-packages/transformers/tokenization_utils.py:1055: in _from_pretrained
2020-05-19T17:01:17.3363615Z tokenizer = cls(*init_inputs, **init_kwargs)
2020-05-19T17:01:17.3365382Z .env/lib/python3.7/site-packages/transformers/tokenization_bert_japanese.py:139: in __init__
2020-05-19T17:01:17.3366229Z do_lower_case=do_lower_case, never_split=never_split, **(mecab_kwargs or {})
2020-05-19T17:01:17.3367669Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2020-05-19T17:01:17.3367895Z
2020-05-19T17:01:17.3369474Z self = <transformers.tokenization_bert_japanese.MecabTokenizer object at 0x7f433565b9d0>
2020-05-19T17:01:17.3371043Z do_lower_case = False, never_split = None, normalize_text = True
2020-05-19T17:01:17.3371414Z mecab_option = None
2020-05-19T17:01:17.3371564Z
2020-05-19T17:01:17.3373681Z def __init__(self, do_lower_case=False, never_split=None, normalize_text=True, mecab_option: Optional[str] = None):
2020-05-19T17:01:17.3373909Z """Constructs a MecabTokenizer.
2020-05-19T17:01:17.3374082Z
2020-05-19T17:01:17.3374357Z Args:
2020-05-19T17:01:17.3375149Z **do_lower_case**: (`optional`) boolean (default True)
2020-05-19T17:01:17.3375850Z Whether to lower case the input.
2020-05-19T17:01:17.3376666Z **never_split**: (`optional`) list of str
2020-05-19T17:01:17.3377692Z Kept for backward compatibility purposes.
2020-05-19T17:01:17.3378953Z Now implemented directly at the base class level (see :func:`PreTrainedTokenizer.tokenize`)
2020-05-19T17:01:17.3379578Z List of token not to split.
2020-05-19T17:01:17.3380559Z **normalize_text**: (`optional`) boolean (default True)
2020-05-19T17:01:17.3381677Z Whether to apply unicode normalization to text before tokenization.
2020-05-19T17:01:17.3382985Z **mecab_option**: (`optional`) string passed to `MeCab.Tagger` constructor (default "")
2020-05-19T17:01:17.3383398Z """
2020-05-19T17:01:17.3384034Z self.do_lower_case = do_lower_case
2020-05-19T17:01:17.3385088Z self.never_split = never_split if never_split is not None else []
2020-05-19T17:01:17.3385841Z self.normalize_text = normalize_text
2020-05-19T17:01:17.3386284Z
2020-05-19T17:01:17.3386881Z > import MeCab
2020-05-19T17:01:17.3388516Z E ModuleNotFoundError: No module named 'MeCab'
```
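A minimal sketch of the kind of guard this dependency enables for the slow Japanese tokenizer tests. The package name `mecab-python3` and the test class name are illustrative assumptions, not necessarily the exact change in this PR:

```python
import unittest

try:
    import MeCab  # binding assumed to come from the mecab-python3 package

    HAS_MECAB = True
except ImportError:
    HAS_MECAB = False


@unittest.skipUnless(HAS_MECAB, "requires MeCab: pip install mecab-python3")
class BertJapaneseTokenizationSlowTest(unittest.TestCase):
    def test_sequence_builders(self):
        ...  # the slow test from the traceback above
```
 | {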
"url": "https://api.github.com/repos/huggingface/transformers/issues/4481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4481/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4481",
"html_url": "https://github.com/huggingface/transformers/pull/4481",
"diff_url": "https://github.com/huggingface/transformers/pull/4481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4481.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4480/comments | https://api.github.com/repos/huggingface/transformers/issues/4480/events | https://github.com/huggingface/transformers/pull/4480 | 621,544,731 | MDExOlB1bGxSZXF1ZXN0NDIwNTc1NjYz | 4,480 | [Reformer] Include char lm to Trainer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patrickvonplaten Thank you your awesome work. I´m super excited about a char-lm. \r\nI´m trying to can train the \"reformer\" model, using the \"google/reformer-enwik8\".\r\n\r\nBut using this script or the run_language_modeling.py I get the error about the lack of tokenizer ( that is expected to a char only LM).\r\n`Model name 'google/reformer-enwik8' was not found in tokenizers`\r\n\r\nCan you give me some pointer how I could train it?",
"Hi bratao! Good point I will consult with our team on how to include models that don't have don't need a tokenizer! Let me get back to you in a couple of days :-) "
] | 1,589 | 1,593 | 1,593 | MEMBER | null | Trainer currently expects every model to have a tokenizer. The reformer: `google/reformer-enwik8` is a char lm which does not require a tokenizer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4480/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4480",
"html_url": "https://github.com/huggingface/transformers/pull/4480",
"diff_url": "https://github.com/huggingface/transformers/pull/4480.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4480.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4479/comments | https://api.github.com/repos/huggingface/transformers/issues/4479/events | https://github.com/huggingface/transformers/pull/4479 | 621,542,900 | MDExOlB1bGxSZXF1ZXN0NDIwNTc0MTk0 | 4,479 | [examples] fix no grad in second pruning in run_bertology | {
"login": "TobiasLee",
"id": 20009381,
"node_id": "MDQ6VXNlcjIwMDA5Mzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/20009381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TobiasLee",
"html_url": "https://github.com/TobiasLee",
"followers_url": "https://api.github.com/users/TobiasLee/followers",
"following_url": "https://api.github.com/users/TobiasLee/following{/other_user}",
"gists_url": "https://api.github.com/users/TobiasLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TobiasLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TobiasLee/subscriptions",
"organizations_url": "https://api.github.com/users/TobiasLee/orgs",
"repos_url": "https://api.github.com/users/TobiasLee/repos",
"events_url": "https://api.github.com/users/TobiasLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/TobiasLee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@18d233d`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4479 +/- ##\n=========================================\n Coverage ? 78.20% \n=========================================\n Files ? 120 \n Lines ? 20083 \n Branches ? 0 \n=========================================\n Hits ? 15705 \n Misses ? 4378 \n Partials ? 0 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `78.74% <0.00%> (ø)` | |\n| [src/transformers/configuration\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21hcmlhbi5weQ==) | `100.00% <0.00%> (ø)` | |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.23% <0.00%> (ø)` | |\n| [src/transformers/configuration\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `92.45% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (ø)` | |\n| [src/transformers/data/processors/xnli.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMveG5saS5weQ==) | `29.54% <0.00%> (ø)` | |\n| [src/transformers/hf\\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `61.97% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZmxhdWJlcnQucHk=) | `40.42% <0.00%> (ø)` | |\n| [src/transformers/data/processors/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvX19pbml0X18ucHk=) | `100.00% <0.00%> (ø)` | |\n| ... and [110 more](https://codecov.io/gh/huggingface/transformers/pull/4479/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=footer). 
Last update [18d233d...b3c4f81](https://codecov.io/gh/huggingface/transformers/pull/4479?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This looks reasonable to me. Thanks for looking into it.",
"can you run \r\n\r\n```\r\npip uninstall -y isort black\r\npip install -e .[quality]\r\nmake style\r\n```\r\n?\r\n\r\nThanks!",
"Thanks for noting how to run code reformatting! ",
"Thanks!"
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | The `new_head_mask` index assignment operation makes it a non-leaf node in the subsequent gradient computation, resulting in the `grad is None` bug mentioned in #3895
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4479/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4479",
"html_url": "https://github.com/huggingface/transformers/pull/4479",
"diff_url": "https://github.com/huggingface/transformers/pull/4479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4479.patch",
"merged_at": 1590067024000
} |
https://api.github.com/repos/huggingface/transformers/issues/4478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4478/comments | https://api.github.com/repos/huggingface/transformers/issues/4478/events | https://github.com/huggingface/transformers/issues/4478 | 621,503,460 | MDU6SXNzdWU2MjE1MDM0NjA= | 4,478 | ❓ [TPU] [Trainer] Moving model to device before setting optimizer slow the training | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have the same problem, but I'm not sure if it's really a problem. I thought the extreme speed before the fix was because it wasn't training properly, and that the slower speed now is supposed to be this way, but that's just my guess.",
"Hi, you're right @LeonieWeissweiler. Previous to that fix, the optimizer wasn't actually adjusting weights, resulting in a major speed-up (but the script in itself wasn't working).\r\n\r\n@Colanim, do you mind specifying what exactly you're training on? When training on TPU there's a lot you should take into account: batch size, sequence length, number of cores being the most important. Do you mind giving a bit of context as to what you're trying to run?\r\n\r\n We've also merged this PR https://github.com/huggingface/transformers/pull/4467 which solves quite a few issues with the TPU training. Please make sure to install from source to benefit from that commit.\r\n\r\nFrom my tests, on TPU with 8 cores (v3-8), on MNLI I reach 22 minutes/epoch with a batch size of 8, but **6 minutes/epoch** (with a 2 minute tracing, that isn't necessary for the following epochs) with a batch size of 128 (which does train with a final accuracy of 81% using `bert-base-cased`, single epoch). ",
"I'm training ELECTRA for Extractive Text Summarization.\r\n\r\nI will try to increase the batch size and see the results, thanks for the pointer.\r\n\r\nWhat bother me is that I used TFElectra and I could train the model at ~8 iterations per sec.\r\n\r\nSame model and same hyper-parameters on Pytorch and it's slower.\r\n\r\nBut I realized that in my model, since it's extractive summarization, I'm extracting the [CLS] representation of each sentence. These CLS position varies from sample to sample. Maybe that's why it's slower on pytorch-xla ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,596 | 1,596 | CONTRIBUTOR | null | # ❓ Questions & Help
On master, after applying the fix of #4450, the training on 8 TPU cores is much slower.
* Before the fix : **20 min / epoch (8 iterations / s)**
* After the fix : **1h30 / epoch (2~3 iterations / s)**
Of course the training before the fix was not working (loss was not decreasing).
But this slowdown is not expected: the TF2 equivalent with the same dataset takes **20 min / epoch**.
---
Is anyone else running into the same problem?
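For context, here is a minimal sketch of the ordering the fix introduced (my reading of #4450 based on the issue title above; not the exact `Trainer` code, and `model` is assumed to be already built):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = model.to(device)  # move the model to the XLA device first (the change from #4450)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # then build the optimizer on the device parameters
```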
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4478/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4478/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4477/comments | https://api.github.com/repos/huggingface/transformers/issues/4477/events | https://github.com/huggingface/transformers/pull/4477 | 621,499,676 | MDExOlB1bGxSZXF1ZXN0NDIwNTM5MDQw | 4,477 | Remove warning of deprecation | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=h1) Report\n> Merging [#4477](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efbc1c5a9d96048ab11f8d746fe51107cb91646f&el=desc) will **increase** coverage by `0.08%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4477 +/- ##\n==========================================\n+ Coverage 78.03% 78.12% +0.08% \n==========================================\n Files 123 123 \n Lines 20477 20477 \n==========================================\n+ Hits 15980 15997 +17 \n+ Misses 4497 4480 -17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/4477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `96.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `34.07% <0.00%> (+5.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=footer). Last update [efbc1c5...2e2abdc](https://codecov.io/gh/huggingface/transformers/pull/4477?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | Remove warning of deprecated overload of addcdiv_
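The change boils down to switching from the deprecated positional overload to the keyword form inside the `AdamW` step, roughly (variable names as in the optimizer's `step`):

```python
# deprecated positional overload, warns on PyTorch 1.5:
# p.data.addcdiv_(-step_size, exp_avg, denom)

# keyword form, accepted by both PyTorch 1.4 and 1.5:
p.data.addcdiv_(exp_avg, denom, value=-step_size)
```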
Fix #4451 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4477/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4477",
"html_url": "https://github.com/huggingface/transformers/pull/4477",
"diff_url": "https://github.com/huggingface/transformers/pull/4477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4477.patch",
"merged_at": 1590007710000
} |
https://api.github.com/repos/huggingface/transformers/issues/4476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4476/comments | https://api.github.com/repos/huggingface/transformers/issues/4476/events | https://github.com/huggingface/transformers/issues/4476 | 621,455,381 | MDU6SXNzdWU2MjE0NTUzODE= | 4,476 | Tokenizer encode to have an option to overflow from left | {
"login": "misrasaurabh1",
"id": 1271289,
"node_id": "MDQ6VXNlcjEyNzEyODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/misrasaurabh1",
"html_url": "https://github.com/misrasaurabh1",
"followers_url": "https://api.github.com/users/misrasaurabh1/followers",
"following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}",
"gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions",
"organizations_url": "https://api.github.com/users/misrasaurabh1/orgs",
"repos_url": "https://api.github.com/users/misrasaurabh1/repos",
"events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}",
"received_events_url": "https://api.github.com/users/misrasaurabh1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@misrasaurabh1 What solution do you use now for this encoding dialog data problem?",
"I use something like\r\n`self.tokenizer.encode(input)[-self.block_size:]`\r\nThis throws a warning for length overflow so I deactivate it with logging. \r\nAlso one has to make attention masks separately as some models require this.",
"Indeed, we should add an option to truncate on the left!\r\ncc @n1t0 for our sprint of September.",
"perhaps add a truncation_side to https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer to be consistent with padding_side.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@thomwolf @n1t0 Any plan for this? I just saw this because of the bot.\r\n\r\nI think I can do this, seems like all the logic is here. \r\nhttps://github.com/huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/tokenization_utils_base.py#L2766\r\n\r\nBut how about fast 🤗 Tokenizers? Will I need to also change the rust code?\r\n\r\nAnd I noticed something that might be a bug, and can be improved:\r\n\r\nhttps://github.com/huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/tokenization_utils_base.py#L2816-L2831\r\n\r\nHere it loops `num_tokens_to_remove` times to decide how many tokens needs to be truncated for each sequence, which can be calculated without looping.\r\n\r\nAnd in case `stride` is not 0, it seems to return up to `stride`*`num_tokens_to_remove` extra tokens to `overflowing_tokens`.\r\nhttps://github.com/huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/tokenization_utils_base.py#L2801-L2803\r\nAlso it seems weird to me that `overflowing_tokens` will be mixed with tokens from `ids` and `pair_ids`. Perhaps it should be a tuple of list if `TruncationStrategy` is `longest_first`.\r\n\r\nNote to self: `overflowing_tokens` is used in squad to construct another pair if the doc is too long. `stride` is also used in squad. I can't find other use of `overflowing_tokens`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/969859d5f67c7106de4d1098c4891c9b03694bbe/src/transformers/data/processors/squad.py#L154-L216",
"One feedback about what's happening with this facility of left truncation being not available - its harder to use the datasets library and we have to do python Hackery which reduces the benefits of using the datasets library in the first place.",
"I recently needed to do exactly this, but ran into this issue so I had to manually truncate the text. Simply doing `encoded_tensor[-max_length:]` would also truncate samples that are less than `max_length` since they are padded to the right. \r\n\r\nHere's the approach I used instead:\r\n```python\r\ndef encode_right_truncated(tokenizer, text, padding='max_length', max_length=512, add_special_tokens=True):\r\n tokenized = tokenizer.tokenize(text, padding=padding, max_length=max_length, add_special_tokens=add_special_tokens)\r\n \r\n if not add_special_tokens:\r\n truncated = tokenized[-max_length:]\r\n else:\r\n truncated = tokenized[0:1] + tokenized[-(max_length-1):]\r\n \r\n ids = tokenizer.convert_tokens_to_ids(truncated)\r\n \r\n return ids\r\n```\r\n\r\nHope this helps future people finding this from Google/DDG",
"For anyone arriving here from search, note that this is now possible by setting [`truncation_side`](https://huggingface.co/docs/transformers/main/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.truncation_side) \r\n```python\r\n# specify when initializing the tokenizer,\r\ntokenizer = AutoTokenizer.from_pretrained(..., truncation_side = \"left\")\r\n# or modify an already-initialized tokenizer, like\r\ntokenizer.truncation_side = \"right\"\r\n```\r\nsee https://github.com/huggingface/transformers/pull/12913"
] | 1,589 | 1,686 | 1,604 | CONTRIBUTOR | null | # 🚀 Feature request
Current tokenizer encode variants (encode, batch_encode, batch_encode_plus) handle sequences longer than max_length by overflowing tokens from the right-hand side, thus restricting the length to max_length. This feature request is to add an option for the tokenizer encode methods to overflow tokens from the left-hand side as well.
## Motivation
For problems dealing with dialog, if one were to train an intent classification or next sentence prediction model and the dialog was longer than max_length, one would like to throw away the tokens from the beginning of the conversation as they are less relevant than the more recent messages.
This motivates the need for a encoder that works well with dialog data where more recent tokens are more valuable.
## Your contribution
I could change the function `truncate_sequences` by adding a new truncation_strategy option that truncates from the left, but I want to get feedback from the Hugging Face team about this proposal first.
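A minimal sketch of the proposed behaviour (illustrative names only, not an actual `transformers` API):

```python
def truncate_left(ids, max_length):
    """Drop tokens from the left until the sequence fits in max_length."""
    if len(ids) <= max_length:
        return ids, []
    overflowing = ids[:-max_length]  # the oldest tokens, removed from the left
    return ids[-max_length:], overflowing
```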
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4476/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4476/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4475/comments | https://api.github.com/repos/huggingface/transformers/issues/4475/events | https://github.com/huggingface/transformers/issues/4475 | 621,443,577 | MDU6SXNzdWU2MjE0NDM1Nzc= | 4,475 | Request for hosting model files in a Virtual Hosted-Style S3 buckets | {
"login": "rjsaito",
"id": 15206644,
"node_id": "MDQ6VXNlcjE1MjA2NjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/15206644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rjsaito",
"html_url": "https://github.com/rjsaito",
"followers_url": "https://api.github.com/users/rjsaito/followers",
"following_url": "https://api.github.com/users/rjsaito/following{/other_user}",
"gists_url": "https://api.github.com/users/rjsaito/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rjsaito/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rjsaito/subscriptions",
"organizations_url": "https://api.github.com/users/rjsaito/orgs",
"repos_url": "https://api.github.com/users/rjsaito/repos",
"events_url": "https://api.github.com/users/rjsaito/events{/privacy}",
"received_events_url": "https://api.github.com/users/rjsaito/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @rjsaito we are actually moving to serving all our files from the cloudfront powered cdn.huggingface.co",
"Awesome! Do you have a current ETA when this change would be in place?",
"It's already in place for model weights.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,596 | 1,596 | NONE | null | Is there any plans for the s3 buckets currently hosting the model files to migrate from the current "Path-Style Request" format to a "Virtual Hosted-Style Request" format?
Path-Style URLs follow the following format (s3.amazonaws.com/* OR s3.Region.amazonaws.com/*). For example, today the model config file for 'bert_uncased_L-2_H-128_A-2' is accessed via the Path-Style URL:
https://s3.amazonaws.com/models.huggingface.co/bert/google/bert_uncased_L-2_H-128_A-2/config.json
According to AWS, they will be deprecating Path-Style requests (though there will be legacy support) - but one major reason for the migration to the Virtual Hosted-Style URL (which takes the form bucket-name.s3.amazonaws.com or bucket-name.s3.Region.amazonaws.com) is for security reasons (e.g. if companies/organizations need to whitelist sites in their servers to utilize transformer models, the virtual hosted style will reduce the "blast radius" in cases of security breaches).
More details on "Path" vs "Virtual-Hosted" style requests:
https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#path-style-access | {
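To make the difference concrete, here is a hypothetical helper that rewrites a path-style URL into its virtual-hosted-style equivalent (illustration only):

```python
from urllib.parse import urlparse

def to_virtual_hosted_style(url: str) -> str:
    """Rewrite https://s3.amazonaws.com/<bucket>/<key> as https://<bucket>.s3.amazonaws.com/<key>."""
    parsed = urlparse(url)
    bucket, _, key = parsed.path.lstrip("/").partition("/")
    return f"https://{bucket}.{parsed.netloc}/{key}"

# The config URL above would become:
# https://models.huggingface.co.s3.amazonaws.com/bert/google/bert_uncased_L-2_H-128_A-2/config.json
```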
"url": "https://api.github.com/repos/huggingface/transformers/issues/4475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4475/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4474/comments | https://api.github.com/repos/huggingface/transformers/issues/4474/events | https://github.com/huggingface/transformers/pull/4474 | 621,361,632 | MDExOlB1bGxSZXF1ZXN0NDIwNDI5MDM3 | 4,474 | Remove warning of deprecation | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Signature is of `addcdiv_` is different between Pytorch 1.4 and 1.5\r\n\r\nhttps://pytorch.org/docs/1.4.0/tensors.html?highlight=addcdiv#torch.Tensor.addcdiv\r\nhttps://pytorch.org/docs/stable/tensors.html?highlight=addcdiv_#torch.Tensor.addcdiv\r\n\r\nI guess as long as Pytorch 1.4 is supported by `transformers` we can just ignore the Warning given when using Pytorch 1.5",
"See #4477 for a fix that work for both PT1.4 and PT1.5"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Remove warning of deprecated overload of `addcdiv_`
Fix #4451 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4474/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4474",
"html_url": "https://github.com/huggingface/transformers/pull/4474",
"diff_url": "https://github.com/huggingface/transformers/pull/4474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4474.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4473/comments | https://api.github.com/repos/huggingface/transformers/issues/4473/events | https://github.com/huggingface/transformers/pull/4473 | 621,351,142 | MDExOlB1bGxSZXF1ZXN0NDIwNDIwNDUw | 4,473 | Add Fine-tune DialoGPT on new datasets notebook | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=h1) Report\n> Merging [#4473](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c3a70b4eaedab1dd9ad49990cfaa4d6cb8f6a0&el=desc) will **decrease** coverage by `0.42%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4473 +/- ##\n==========================================\n- Coverage 78.41% 77.98% -0.43% \n==========================================\n Files 123 123 \n Lines 20432 20432 \n==========================================\n- Hits 16021 15934 -87 \n- Misses 4411 4498 +87 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4473/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4473/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=footer). Last update [48c3a70...8518af3](https://codecov.io/gh/huggingface/transformers/pull/4473?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Cool notebook! And a great complement to the [model card](https://huggingface.co/ncoop57/DiGPTame-medium) \r\n\r\ncc @patrickvonplaten \r\n\r\nMaybe you can use the `Trainer` in a v2 of the notebook =)\r\n\r\nAnd you could use the [nlp](https://github.com/huggingface/nlp) library to share the dataset, cc @thomwolf ",
"Awesome! "
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | Here is a tutorial notebook I created for fine-tuning the DialoGPT on a Spanish conversation dataset. It shows how to prepare a dataset that conforms to the necessary style of the original DialoGPT dataset and how to train it using a GPU provided by Google Colab. Sadly it is not using the newer Trainer that Huggingface provides, but I thought it might be useful for others trying to work with conversational AI so wanted to share.
Thanks for the amazing library and hugs to all of y'all 🤗!
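For readers who just want the gist, the core of the data preparation is concatenating the turns of each conversation separated by the EOS token. A simplified sketch (not the notebook verbatim):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")

def flatten_dialog(turns):
    # each turn is followed by <|endoftext|>, the separator DialoGPT was trained with
    return "".join(turn + tokenizer.eos_token for turn in turns)
```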
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4473/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4473",
"html_url": "https://github.com/huggingface/transformers/pull/4473",
"diff_url": "https://github.com/huggingface/transformers/pull/4473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4473.patch",
"merged_at": 1590005873000
} |
https://api.github.com/repos/huggingface/transformers/issues/4472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4472/comments | https://api.github.com/repos/huggingface/transformers/issues/4472/events | https://github.com/huggingface/transformers/pull/4472 | 621,344,766 | MDExOlB1bGxSZXF1ZXN0NDIwNDE0OTc4 | 4,472 | [gpu slow tests] fix mbart-large-enro gpu tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=h1) Report\n> Merging [#4472](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c3a70b4eaedab1dd9ad49990cfaa4d6cb8f6a0&el=desc) will **decrease** coverage by `0.41%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4472 +/- ##\n==========================================\n- Coverage 78.41% 77.99% -0.42% \n==========================================\n Files 123 123 \n Lines 20432 20432 \n==========================================\n- Hits 16021 15936 -85 \n- Misses 4411 4496 +85 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=footer). Last update [48c3a70...9152273](https://codecov.io/gh/huggingface/transformers/pull/4472?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4472/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4472",
"html_url": "https://github.com/huggingface/transformers/pull/4472",
"diff_url": "https://github.com/huggingface/transformers/pull/4472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4472.patch",
"merged_at": 1589931932000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4471/comments | https://api.github.com/repos/huggingface/transformers/issues/4471/events | https://github.com/huggingface/transformers/issues/4471 | 621,301,621 | MDU6SXNzdWU2MjEzMDE2MjE= | 4,471 | batch_encode_plus returns same lengths when enable pad_to_max_length | {
"login": "binh-vu",
"id": 4346739,
"node_id": "MDQ6VXNlcjQzNDY3Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4346739?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binh-vu",
"html_url": "https://github.com/binh-vu",
"followers_url": "https://api.github.com/users/binh-vu/followers",
"following_url": "https://api.github.com/users/binh-vu/following{/other_user}",
"gists_url": "https://api.github.com/users/binh-vu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binh-vu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binh-vu/subscriptions",
"organizations_url": "https://api.github.com/users/binh-vu/orgs",
"repos_url": "https://api.github.com/users/binh-vu/repos",
"events_url": "https://api.github.com/users/binh-vu/events{/privacy}",
"received_events_url": "https://api.github.com/users/binh-vu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This is not a bug but expected behaviour. The length of the tokenized input is only calculated after padding.\r\n\r\nhttps://github.com/huggingface/transformers/blob/a08652772791fdaeed6f263b1a99926ca64be5dc/src/transformers/tokenization_utils.py#L1981-L1982\r\n\r\nPerhaps you are right, though, and it would be more useful to get the size before padding!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the following script
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
sents = [
"I can eat glass without harm",
"I cannot eat glass"
]
resp = tokenizer.batch_encode_plus(sents, pad_to_max_length=True, return_lengths=True)
print(resp['length'])
# >>> get [8, 8], should be [8, 6]
```
## Expected behavior
The function `batch_encode_plus` should return the correct lengths of the sentences before they are padded to max length, which should be [8, 6] in the above example. Otherwise, we could just get the lengths from the last dimension of the mask.
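As a workaround, the pre-padding lengths can be recovered from the attention mask returned by the snippet above (assuming attention masks are returned, which is the default):

```python
lengths = [sum(mask) for mask in resp["attention_mask"]]
print(lengths)  # [8, 6]
```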
## Environment info
- `transformers` version:
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): CPU 1.5
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4471/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4470/comments | https://api.github.com/repos/huggingface/transformers/issues/4470/events | https://github.com/huggingface/transformers/pull/4470 | 621,239,774 | MDExOlB1bGxSZXF1ZXN0NDIwMzI4NDAw | 4,470 | Model card for Tereveni-AI/gpt2-124M-uk-fiction | {
"login": "obsh",
"id": 1974420,
"node_id": "MDQ6VXNlcjE5NzQ0MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1974420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/obsh",
"html_url": "https://github.com/obsh",
"followers_url": "https://api.github.com/users/obsh/followers",
"following_url": "https://api.github.com/users/obsh/following{/other_user}",
"gists_url": "https://api.github.com/users/obsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/obsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/obsh/subscriptions",
"organizations_url": "https://api.github.com/users/obsh/orgs",
"repos_url": "https://api.github.com/users/obsh/repos",
"events_url": "https://api.github.com/users/obsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/obsh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"by the way, could you add a \r\n\r\n```\r\n---\r\nlanguage: ukrainian\r\n---\r\n```\r\n\r\nmetadata block on top, for the model to be surfaced in search etc.?",
"Sure, I’ll add it\n\nOn Wed, 20 May 2020 at 18:05, Julien Chaumond <[email protected]>\nwrote:\n\n> by the way, could you add a\n>\n> ---\n> language: ukrainian\n> ---\n>\n> metadata block on top, for the model to be surfaced in search etc.?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/4470#issuecomment-631533257>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAPCBFE27XTTMKNB2IIQTNTRSPWTPANCNFSM4NFJPXQA>\n> .\n>\n-- \n____________________________________\nЗ повагою, Бушковський Олександр\n"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Create model card for "Tereveni-AI/gpt2-124M-uk-fiction" model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4470/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4470",
"html_url": "https://github.com/huggingface/transformers/pull/4470",
"diff_url": "https://github.com/huggingface/transformers/pull/4470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4470.patch",
"merged_at": 1589982267000
} |
https://api.github.com/repos/huggingface/transformers/issues/4469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4469/comments | https://api.github.com/repos/huggingface/transformers/issues/4469/events | https://github.com/huggingface/transformers/pull/4469 | 621,234,909 | MDExOlB1bGxSZXF1ZXN0NDIwMzI0NDE4 | 4,469 | Better None gradients handling in TF Trainer | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=h1) Report\n> Merging [#4469](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5856999a9f2926923f037ecd8d27b8058bcf9dae&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4469 +/- ##\n==========================================\n+ Coverage 77.98% 78.00% +0.01% \n==========================================\n Files 123 123 \n Lines 20436 20431 -5 \n==========================================\n Hits 15938 15938 \n+ Misses 4498 4493 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `17.92% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=footer). Last update [5856999...095c8d2](https://codecov.io/gh/huggingface/transformers/pull/4469?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,591 | 1,590 | CONTRIBUTOR | null | Update the TF Trainer to better handle `None` gradients, in order to have something generic that is no longer task-dependent. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4469/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4469",
"html_url": "https://github.com/huggingface/transformers/pull/4469",
"diff_url": "https://github.com/huggingface/transformers/pull/4469.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4469.patch",
"merged_at": 1590007582000
} |
https://api.github.com/repos/huggingface/transformers/issues/4468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4468/comments | https://api.github.com/repos/huggingface/transformers/issues/4468/events | https://github.com/huggingface/transformers/pull/4468 | 621,216,673 | MDExOlB1bGxSZXF1ZXN0NDIwMzA5MzA0 | 4,468 | [Tests, GPU, SLOW] fix a bunch of GPU hardcoded tests in Pytorch | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=h1) Report\n> Merging [#4468](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5856999a9f2926923f037ecd8d27b8058bcf9dae&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4468 +/- ##\n==========================================\n- Coverage 77.98% 77.98% -0.01% \n==========================================\n Files 123 123 \n Lines 20436 20436 \n==========================================\n- Hits 15938 15937 -1 \n- Misses 4498 4499 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=footer). Last update [5856999...7d9fd53](https://codecov.io/gh/huggingface/transformers/pull/4468?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great!"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | In almost all tests, I forgot to move the model to the GPU via
`model = model.to(torch_device)` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4468/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4468",
"html_url": "https://github.com/huggingface/transformers/pull/4468",
"diff_url": "https://github.com/huggingface/transformers/pull/4468.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4468.patch",
"merged_at": 1589916905000
} |
https://api.github.com/repos/huggingface/transformers/issues/4467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4467/comments | https://api.github.com/repos/huggingface/transformers/issues/4467/events | https://github.com/huggingface/transformers/pull/4467 | 621,214,841 | MDExOlB1bGxSZXF1ZXN0NDIwMzA3OTIx | 4,467 | TPU hangs when saving optimizer/scheduler | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=h1) Report\n> Merging [#4467](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/07dd7c2fd8996fec2979555437dfeff0d38cbf28&el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4467 +/- ##\n==========================================\n- Coverage 78.07% 77.97% -0.10% \n==========================================\n Files 123 123 \n Lines 20436 20439 +3 \n==========================================\n- Hits 15955 15937 -18 \n- Misses 4481 4502 +21 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-4.78%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=footer). Last update [07dd7c2...11186c1](https://codecov.io/gh/huggingface/transformers/pull/4467?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Solid work, as discussed 👍",
"Nice! I tried two days ago and just skipped the checkpoints but something with the eval seemed to be messing things up(ForMultipleChoice) as well. WandB created 8 runs. dont know if its useful, im still figuring out alot of the stuff."
] | 1,589 | 1,590 | 1,590 | MEMBER | null | As when saving a model state dict, the optimizer and scheduler should be saved using `xm.save`, behind an `xm.rendezvous`.
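A minimal sketch of that pattern (assuming `torch_xla`; not the exact `Trainer` code):

```python
import torch_xla.core.xla_model as xm

xm.rendezvous("saving_optimizer_states")  # sync all TPU processes before saving
xm.save(optimizer.state_dict(), "optimizer.pt")  # xm.save only writes from the master ordinal
xm.save(scheduler.state_dict(), "scheduler.pt")
```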
Additional fix: `pl.ParallelLoader` is not a `torch.utils.data.DataLoader`, and, therefore, must be reinitialized at each epoch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4467/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4467/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4467",
"html_url": "https://github.com/huggingface/transformers/pull/4467",
"diff_url": "https://github.com/huggingface/transformers/pull/4467.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4467.patch",
"merged_at": 1590067108000
} |
https://api.github.com/repos/huggingface/transformers/issues/4466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4466/comments | https://api.github.com/repos/huggingface/transformers/issues/4466/events | https://github.com/huggingface/transformers/pull/4466 | 621,161,584 | MDExOlB1bGxSZXF1ZXN0NDIwMjY0MzA4 | 4,466 | Model card for RuPERTa-base fine-tuned for NER | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=h1) Report\n> Merging [#4466](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5856999a9f2926923f037ecd8d27b8058bcf9dae&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4466 +/- ##\n==========================================\n- Coverage 77.98% 77.98% -0.01% \n==========================================\n Files 123 123 \n Lines 20436 20436 \n==========================================\n- Hits 15938 15937 -1 \n- Misses 4498 4499 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=footer). Last update [5856999...760964e](https://codecov.io/gh/huggingface/transformers/pull/4466?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"nice example"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4466/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4466",
"html_url": "https://github.com/huggingface/transformers/pull/4466",
"diff_url": "https://github.com/huggingface/transformers/pull/4466.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4466.patch",
"merged_at": 1589982325000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4465/comments | https://api.github.com/repos/huggingface/transformers/issues/4465/events | https://github.com/huggingface/transformers/pull/4465 | 621,143,910 | MDExOlB1bGxSZXF1ZXN0NDIwMjQ5NzI4 | 4,465 | [ci] Slow GPU tests run daily | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,589 | 1,590 | 1,590 | MEMBER | null | Could be useful to fix a few of the warnings and deprecation warnings too | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4465/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4465",
"html_url": "https://github.com/huggingface/transformers/pull/4465",
"diff_url": "https://github.com/huggingface/transformers/pull/4465.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4465.patch",
"merged_at": 1590442083000
} |
https://api.github.com/repos/huggingface/transformers/issues/4464 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4464/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4464/comments | https://api.github.com/repos/huggingface/transformers/issues/4464/events | https://github.com/huggingface/transformers/pull/4464 | 621,097,557 | MDExOlB1bGxSZXF1ZXN0NDIwMjEzNzkx | 4,464 | [Longformer] Docs and clean API | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=h1) Report\n> Merging [#4464](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f1d0471489352ec01556ae61f8e8246002bbc58&el=desc) will **increase** coverage by `0.04%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4464 +/- ##\n==========================================\n+ Coverage 77.93% 77.98% +0.04% \n==========================================\n Files 123 123 \n Lines 20430 20426 -4 \n==========================================\n+ Hits 15922 15929 +7 \n+ Misses 4508 4497 -11 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `82.94% <100.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+1.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=footer). Last update [8f1d047...e53e5dc](https://codecov.io/gh/huggingface/transformers/pull/4464?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | This PR:
- adds a documentation page for Longformer. @ibeltagy - it's best to read it using this link I think: https://github.com/huggingface/transformers/pull/4464/files?short_path=3909947#diff-3909947f36862a1731195bf05c85c64c.
- fixes a typo to correctly render the pretrained models doc page
- changes the API of Longformer slightly. I removed the `attention_mode` from Longformer because I don't think it should be used. The mode should always be `Longformer` since it is a `Longformer` model. The user should not be able to create a `RobertaModel` using `LongformerModel`.
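A hypothetical sketch of the resulting usage (the model identifiers are assumptions, not taken from this PR):

```python
# Compare the two model classes directly instead of switching Longformer
# into a RoBERTa-like attention mode.
from transformers import LongformerModel, RobertaModel

longformer = LongformerModel.from_pretrained("allenai/longformer-base-4096")
roberta = RobertaModel.from_pretrained("roberta-base")
```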
For comparisons, people should use `RobertaModel` vs. `LongformerModel`, not different modes of Longformer that are essentially the same as `RobertaModel` (correct me if I'm wrong here @ibeltagy). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4464/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4464",
"html_url": "https://github.com/huggingface/transformers/pull/4464",
"diff_url": "https://github.com/huggingface/transformers/pull/4464.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4464.patch",
"merged_at": 1589917957000
} |
https://api.github.com/repos/huggingface/transformers/issues/4463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4463/comments | https://api.github.com/repos/huggingface/transformers/issues/4463/events | https://github.com/huggingface/transformers/pull/4463 | 621,045,885 | MDExOlB1bGxSZXF1ZXN0NDIwMTc0NjU1 | 4,463 | Adds predict stage for glue tasks, and generate result files which can be submitted to gluebenchmark.com | {
"login": "stdcoutzyx",
"id": 1142862,
"node_id": "MDQ6VXNlcjExNDI4NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1142862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stdcoutzyx",
"html_url": "https://github.com/stdcoutzyx",
"followers_url": "https://api.github.com/users/stdcoutzyx/followers",
"following_url": "https://api.github.com/users/stdcoutzyx/following{/other_user}",
"gists_url": "https://api.github.com/users/stdcoutzyx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stdcoutzyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stdcoutzyx/subscriptions",
"organizations_url": "https://api.github.com/users/stdcoutzyx/orgs",
"repos_url": "https://api.github.com/users/stdcoutzyx/repos",
"events_url": "https://api.github.com/users/stdcoutzyx/events{/privacy}",
"received_events_url": "https://api.github.com/users/stdcoutzyx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks good!\r\n\r\nI just improved consistency with other scripts we have (in particular, `run_ner.py`) by:\r\n- using an enum instead of two boolean flags\r\n- I also always append the actual label name in the predictions file, which removes the need for a new arg\r\n\r\nLet me know if that works for you and I'll merge to master soon",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=h1) Report\n> Merging [#4463](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f1d0471489352ec01556ae61f8e8246002bbc58&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `46.03%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4463 +/- ##\n==========================================\n- Coverage 77.93% 77.91% -0.02% \n==========================================\n Files 123 123 \n Lines 20430 20474 +44 \n==========================================\n+ Hits 15922 15953 +31 \n- Misses 4508 4521 +13 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.62% <31.81%> (-1.41%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.15% <77.77%> (-4.05%)` | :arrow_down: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <100.00%> (+0.47%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (+1.64%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=footer). Last update [8f1d047...d172a3b](https://codecov.io/gh/huggingface/transformers/pull/4463?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks Julien for the code improvement! This look very good to me. ",
"Thank you for contributing this!"
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | By simply fine-tuning roberta-large for 3k steps on several tasks, I achieved:
Task | Metrics | Score
-- | -- | --
Microsoft Research Paraphrase Corpus | F1 / Accuracy | 91.5/88.6
Semantic Textual Similarity Benchmark | Pearson-Spearman Corr | 90.7/90.2
Quora Question Pairs | F1 / Accuracy | 69.5/87.3
Recognizing Textual Entailment | Accuracy | 82.0
Winograd NLI | Accuracy | 65.1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4463/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4463",
"html_url": "https://github.com/huggingface/transformers/pull/4463",
"diff_url": "https://github.com/huggingface/transformers/pull/4463.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4463.patch",
"merged_at": 1590067065000
} |
https://api.github.com/repos/huggingface/transformers/issues/4462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4462/comments | https://api.github.com/repos/huggingface/transformers/issues/4462/events | https://github.com/huggingface/transformers/pull/4462 | 621,042,093 | MDExOlB1bGxSZXF1ZXN0NDIwMTcxNjE1 | 4,462 | add T5 fine-tuning notebook [Community notebooks] | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=h1) Report\n> Merging [#4462](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f1d0471489352ec01556ae61f8e8246002bbc58&el=desc) will **increase** coverage by `0.04%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4462 +/- ##\n==========================================\n+ Coverage 77.93% 77.98% +0.04% \n==========================================\n Files 123 123 \n Lines 20430 20430 \n==========================================\n+ Hits 15922 15932 +10 \n+ Misses 4508 4498 -10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+1.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=footer). Last update [8f1d047...ca0d2a0](https://codecov.io/gh/huggingface/transformers/pull/4462?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's awesome. Thanks @patil-suraj! @mariamabarham @lhoestq - might it be interesting to add `emotion classification` and `swag` to `nlp`? ",
"Reworded the description a bit - hope that's ok @patil-suraj ",
"> Reworded the description a bit - hope that's ok @patil-suraj\r\n\r\n@patrickvonplaten yes, it's more clear now. Thank you!"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | @patrickvonplaten
This is the second notebook; it shows how to fine-tune T5 for multiple tasks with the text-to-text approach (IMDB, emotion classification, SWAG) that we discussed in issue #4426. I didn't find the emotion and SWAG datasets in the `nlp` library, so I decided to keep my original dataset code to keep everything unified.
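For illustration, a rough text-to-text sketch (the task prefix and label word are my assumptions, and recent transformers versions use `labels` where v2.x used `lm_labels`; the notebook's actual code may differ):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Classification becomes generation: text in, label word out.
source = tokenizer.encode("emotion: I can't believe we won the finals!", return_tensors="pt")
target = tokenizer.encode("joy", return_tensors="pt")

outputs = model(input_ids=source, labels=target)
loss = outputs[0]  # language-modeling loss used for fine-tuning
```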
Also, there's growing interest in `pytorch-lightning`, so I decided to keep the `lightning` trainer. But if you think I should use the HF Trainer, I can add that as well. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4462/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4462",
"html_url": "https://github.com/huggingface/transformers/pull/4462",
"diff_url": "https://github.com/huggingface/transformers/pull/4462.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4462.patch",
"merged_at": 1589905588000
} |
https://api.github.com/repos/huggingface/transformers/issues/4461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4461/comments | https://api.github.com/repos/huggingface/transformers/issues/4461/events | https://github.com/huggingface/transformers/issues/4461 | 621,033,600 | MDU6SXNzdWU2MjEwMzM2MDA= | 4,461 | ProjectedAdaptiveLogSoftmax.log_prob raises Exception | {
"login": "gasteigerjo",
"id": 9202783,
"node_id": "MDQ6VXNlcjkyMDI3ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9202783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gasteigerjo",
"html_url": "https://github.com/gasteigerjo",
"followers_url": "https://api.github.com/users/gasteigerjo/followers",
"following_url": "https://api.github.com/users/gasteigerjo/following{/other_user}",
"gists_url": "https://api.github.com/users/gasteigerjo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gasteigerjo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gasteigerjo/subscriptions",
"organizations_url": "https://api.github.com/users/gasteigerjo/orgs",
"repos_url": "https://api.github.com/users/gasteigerjo/repos",
"events_url": "https://api.github.com/users/gasteigerjo/events{/privacy}",
"received_events_url": "https://api.github.com/users/gasteigerjo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Transformer-XL
Language I am using the model on (English, Chinese ...): WikiText-103 (English)
The problem arises when using:
* [x] the official example scripts: `run_transfo_xl.py` (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on are:
* [x] an official task: WikiText-103
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Fix `run_transfo_xl.py` by removing the unused `work_dir` argument (line 52) and changing `lm_labels=target` to `labels=target` (line 108).
2. Add `logits = self.crit.log_prob(pred_hid.flatten(0, 1))` right before the model output, e.g. in line 919 of `modeling_transfo_xl.py`.
3. Run `run_transfo_xl.py`.
4. Look at the error message:
```
The size of tensor a (1280) must match the size of tensor b (20000) at non-singleton dimension 1
File ".../transformers/src/transformers/modeling_transfo_xl_utilities.py", line 246, in log_prob
logprob_i = head_logprob[:, -i] + tail_logprob_i
File ".../transformers/src/transformers/modeling_transfo_xl.py", line 920, in forward
logits = self.crit.log_prob(pred_hid.flatten(0, 1))
File ".../run_transfo_xl.py", line 107, in evaluate
ret = model(data, labels=target, mems=mems)
File ".../run_transfo_xl.py", line 124, in main
test_loss = evaluate(te_iter)
File ".../run_transfo_xl.py", line 143, in <module>
main()
```
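A standalone repro sketch (my own construction; the sizes are assumed from the `transfo-xl-wt103` config and this is expected to hit the same exception, not re-verified here):

```python
import torch
from transformers.modeling_transfo_xl_utilities import ProjectedAdaptiveLogSoftmax

crit = ProjectedAdaptiveLogSoftmax(
    n_token=267735, d_embed=1024, d_proj=1024,
    cutoffs=[20000, 40000, 200000], div_val=4,
)
hidden = torch.randn(8, 1024)
log_probs = crit.log_prob(hidden)  # should return (8, 267735) log-probabilities
```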
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
`log_prob` should return the log probabilities instead of raising an Exception.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Master (commit 384f0eb)
- Platform: Ubuntu
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (CUDA 10.1, CuDNN 7.6.3)
- Tensorflow version (GPU?): -
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4461/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4460/comments | https://api.github.com/repos/huggingface/transformers/issues/4460/events | https://github.com/huggingface/transformers/pull/4460 | 621,030,138 | MDExOlB1bGxSZXF1ZXN0NDIwMTYyMDYy | 4,460 | Attempt to do some optimizations for BERT models | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,651 | 1,595 | MEMBER | null | - Use the functional API as much as possible instead of creating a class instance every time
- Precompute and store the attention scaling factor to avoid computing `1/sqrt(...)` on every forward pass
- Refactor self-attention to group the QKV weights and increase hardware density (a rough sketch of this idea follows below) | {
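An illustrative sketch of the grouped-QKV idea (an editor's example, not the actual diff of this PR):

```python
import math

import torch
from torch import nn
from torch.nn import functional as F


class FusedSelfAttention(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size)  # grouped Q, K, V
        self.scale = 1.0 / math.sqrt(self.head_dim)  # precomputed once

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        bsz, seq_len, _ = hidden_states.shape
        q, k, v = self.qkv(hidden_states).chunk(3, dim=-1)  # one matmul, then split
        shape = (bsz, seq_len, self.num_heads, self.head_dim)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        scores = torch.matmul(q, k.transpose(-1, -2)) * self.scale
        probs = F.softmax(scores, dim=-1)  # functional call, no nn.Softmax instance
        context = torch.matmul(probs, v).transpose(1, 2).reshape(bsz, seq_len, -1)
        return context
```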
"url": "https://api.github.com/repos/huggingface/transformers/issues/4460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4460/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4460",
"html_url": "https://github.com/huggingface/transformers/pull/4460",
"diff_url": "https://github.com/huggingface/transformers/pull/4460.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4460.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4459/comments | https://api.github.com/repos/huggingface/transformers/issues/4459/events | https://github.com/huggingface/transformers/issues/4459 | 621,027,190 | MDU6SXNzdWU2MjEwMjcxOTA= | 4,459 | Pretrained Transformer-XL gives unreasonable result on WikiText-103 | {
"login": "gasteigerjo",
"id": 9202783,
"node_id": "MDQ6VXNlcjkyMDI3ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9202783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gasteigerjo",
"html_url": "https://github.com/gasteigerjo",
"followers_url": "https://api.github.com/users/gasteigerjo/followers",
"following_url": "https://api.github.com/users/gasteigerjo/following{/other_user}",
"gists_url": "https://api.github.com/users/gasteigerjo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gasteigerjo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gasteigerjo/subscriptions",
"organizations_url": "https://api.github.com/users/gasteigerjo/orgs",
"repos_url": "https://api.github.com/users/gasteigerjo/repos",
"events_url": "https://api.github.com/users/gasteigerjo/events{/privacy}",
"received_events_url": "https://api.github.com/users/gasteigerjo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Transformer-XL (`transfo-xl-wt103`)
Language I am using the model on (English, Chinese ...): WikiText-103 (English)
The problem arises when using:
* [x] the official example scripts: `run_transfo_xl.py` (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on are:
* [x] an official task: WikiText-103 (not GLUE/SQuAD)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Fix `run_transfo_xl.py` by removing the unused `work_dir` argument (line 52) and changing `lm_labels=target` to `labels=target` (line 108).
2. Run `run_transfo_xl.py`.
3. Observe the result: `test loss 10.20 | test ppl 26951.114`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
A reasonable result on the order of PPL ≈ 18.3, as reported in the paper. I know that the result will not be exactly the same, but something is definitely wrong here.
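A quick sanity check on the numbers (my own arithmetic, not from the report):

```python
import math

print(math.exp(10.20))  # ~26903 -> consistent with "test ppl 26951" up to loss rounding
print(math.log(18.3))   # ~2.91  -> the test loss a correct run should approach
```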
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Both 2.9.1 and Master (commit 384f0eb)
- Platform: Ubuntu
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (CUDA 10.1, CuDNN 7.6.3)
- Tensorflow version (GPU?): -
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4459/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4458/comments | https://api.github.com/repos/huggingface/transformers/issues/4458/events | https://github.com/huggingface/transformers/pull/4458 | 620,979,451 | MDExOlB1bGxSZXF1ZXN0NDIwMTIxNDEw | 4,458 | layer name change to match compatibility with pytorch layer name in BertForQuestionAnswering | {
"login": "naveenjafer",
"id": 7025448,
"node_id": "MDQ6VXNlcjcwMjU0NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7025448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naveenjafer",
"html_url": "https://github.com/naveenjafer",
"followers_url": "https://api.github.com/users/naveenjafer/followers",
"following_url": "https://api.github.com/users/naveenjafer/following{/other_user}",
"gists_url": "https://api.github.com/users/naveenjafer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naveenjafer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naveenjafer/subscriptions",
"organizations_url": "https://api.github.com/users/naveenjafer/orgs",
"repos_url": "https://api.github.com/users/naveenjafer/repos",
"events_url": "https://api.github.com/users/naveenjafer/events{/privacy}",
"received_events_url": "https://api.github.com/users/naveenjafer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=h1) Report\n> Merging [#4458](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/384f0eb2f9d42e44094dbfd0917ccf4e6ddb462a&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4458 +/- ##\n==========================================\n- Coverage 77.96% 77.88% -0.09% \n==========================================\n Files 120 120 \n Lines 20140 20140 \n==========================================\n- Hits 15703 15686 -17 \n- Misses 4437 4454 +17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4458/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.82% <0.00%> (ø)` | |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4458/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4458/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4458/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=footer). Last update [384f0eb...568d3f1](https://codecov.io/gh/huggingface/transformers/pull/4458?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am not quite sure who the right person to review and include this PR would be. But commenting to open it up back nonetheless. "
] | 1,589 | 1,625 | 1,595 | NONE | null | As pointed out in #438, when using BertForQuestionAnswering to load a TensorFlow model with from_pretrained, one runs into an error:
`AttributeError: 'BertForQuestionAnswering' object has no attribute 'classifier'`
As pointed out in the thread, it should be "qa_outputs" and not "classifier" for this functionality to work as expected.
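A hypothetical repro sketch (the checkpoint path is made up):

```python
from transformers import BertForQuestionAnswering

# Loading a TF checkpoint walks the PyTorch module's attribute names, so the
# QA head must be named `qa_outputs`, not `classifier`.
model = BertForQuestionAnswering.from_pretrained("./my_tf_checkpoint_dir", from_tf=True)
```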
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4458/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4458",
"html_url": "https://github.com/huggingface/transformers/pull/4458",
"diff_url": "https://github.com/huggingface/transformers/pull/4458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4458.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4457/comments | https://api.github.com/repos/huggingface/transformers/issues/4457/events | https://github.com/huggingface/transformers/issues/4457 | 620,957,164 | MDU6SXNzdWU2MjA5NTcxNjQ= | 4,457 | FastTokenizer add_special_tokens also adding individual characters for multi character tokens | {
"login": "jwallat",
"id": 24674150,
"node_id": "MDQ6VXNlcjI0Njc0MTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/24674150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwallat",
"html_url": "https://github.com/jwallat",
"followers_url": "https://api.github.com/users/jwallat/followers",
"following_url": "https://api.github.com/users/jwallat/following{/other_user}",
"gists_url": "https://api.github.com/users/jwallat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jwallat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jwallat/subscriptions",
"organizations_url": "https://api.github.com/users/jwallat/orgs",
"repos_url": "https://api.github.com/users/jwallat/repos",
"events_url": "https://api.github.com/users/jwallat/events{/privacy}",
"received_events_url": "https://api.github.com/users/jwallat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Thanks for the extensive bug report! I can confirm that you are correct and that the issue does not occur when using the slow option.\r\n\r\nPinging @n1t0 "
] | 1,589 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT FastTokenizer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] my own modified scripts: (give details below)
The tasks I am working on are:
* [X] my own task or dataset: (give details below)
As far as I can tell, adding tokens (e.g. '[EOS]') to the FastTokenizer will result in all the individual characters of the token being added to the tokenizer ('E', 'O', 'S'). This seems to occur only when using tokenizer.add_special_tokens(), as described [here](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.add_special_tokens). However, adding the tokens via the constructor seems to work just fine.
This bug causes problems when using the uncased models, as we don't want uppercase letters. Also, the unwanted additions to the vocab don't seem to show up in len(tokenizer), which results in index out of range errors when feeding into BERT.
This would be a breaking case:
```
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)
print('Tokenizer len before: {}'.format(len(tokenizer)))
num_added = tokenizer.add_special_tokens({'eos_token': '[EOS]', 'bos_token': '[BOS]'})
print('Tokenizer len after: {}'.format(len(tokenizer)))
print('Number tokens added: ', num_added)
print(tokenizer.bos_token)
print(tokenizer.eos_token)
# We can see that the tokens have been added successfully
# However, encoding the same sequence as before, we run into problems:
encoded = tokenizer.encode('This is a big S!')
print(encoded)
print(tokenizer.convert_ids_to_tokens(encoded))
# If you look carefully, you can see that the 'S' in the sequence is not lowercase.
# Also the id in the line above (30526) should not be higher than the tokenizer len (30524)
# If we feed this into BERT (after model.resize_token_embeddings(len(tokenizer))),
# this will crash with an index out of range exception.
```
Outputs:
```
Tokenizer len before: 30522
Tokenizer len after: 30524
Number tokens added: 2
[BOS]
[EOS]
[101, 2023, 2003, 1037, 2502, 30526, 999, 102]
['[CLS]', 'this', 'is', 'a', 'big', 'S', '!', '[SEP]']
```
Edit: My proposed workaround of adding the special tokens via the constructor also does not work. The tokens are accessible via tokenizer.<eos/bos>_token, but adding tokens this way does not change the number of tokens in the vocab, i.e., len(tokenizer) doesn't reflect the newly added tokens.
## To reproduce
Steps to reproduce the behavior:
Please find this [colab notebook](https://colab.research.google.com/drive/1hMEr0gpbyGJCZvIzFB22eKlmb9I-vKuu?usp=sharing) investigating the bug
## Expected behavior
add_special_tokens() of the fast tokenizer should behave the same as for the regular tokenizer: only the full special token '[EOS]' should be added, not its individual characters. Furthermore, I would expect that if something is added, this would also be reflected in the length of the tokenizer.
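For contrast, a sketch of the slow (non-fast) path, which is confirmed below to behave correctly (the printed tokens are what the report implies, not re-verified here):

```python
from transformers import AutoTokenizer

slow = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
slow.add_special_tokens({"eos_token": "[EOS]", "bos_token": "[BOS]"})
encoded = slow.encode("This is a big S!")
print(slow.convert_ids_to_tokens(encoded))
# expected: ['[CLS]', 'this', 'is', 'a', 'big', 's', '!', '[SEP]'] -- no stray 'S'
```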
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.1
- Platform: Colab, linux
- Python version: 3.7
- PyTorch version (GPU?): 1.5, gpu
- Tensorflow version (GPU?):
- Using GPU in script?: tried both, no difference
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4457/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4456/comments | https://api.github.com/repos/huggingface/transformers/issues/4456/events | https://github.com/huggingface/transformers/issues/4456 | 620,955,230 | MDU6SXNzdWU2MjA5NTUyMzA= | 4,456 | Problems About Using the Run_language_modeling with Tf2. | {
"login": "RichardLS09",
"id": 31444570,
"node_id": "MDQ6VXNlcjMxNDQ0NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/31444570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RichardLS09",
"html_url": "https://github.com/RichardLS09",
"followers_url": "https://api.github.com/users/RichardLS09/followers",
"following_url": "https://api.github.com/users/RichardLS09/following{/other_user}",
"gists_url": "https://api.github.com/users/RichardLS09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RichardLS09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RichardLS09/subscriptions",
"organizations_url": "https://api.github.com/users/RichardLS09/orgs",
"repos_url": "https://api.github.com/users/RichardLS09/repos",
"events_url": "https://api.github.com/users/RichardLS09/events{/privacy}",
"received_events_url": "https://api.github.com/users/RichardLS09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @jplu, might be of interest :)",
"Hello!\r\n\r\nLike this there is not enough details to see what is the issue. Can you provide the piece of code you are trying to run? plz :)\r\n\r\nAlso if you are looking for how to properly use the trainer I suggest you to look at the already existing examples:\r\n\r\n- [Question-answering](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_tf_squad.py)\r\n- [Token classification](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_tf_ner.py)\r\n- [Sequence classification](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py)\r\n- [Multiple Choice](https://github.com/huggingface/transformers/blob/master/examples/multiple-choice/run_tf_multiple_choice.py)\r\n\r\nThey are all working well with CPU/Single GPU/Multiple GPU, only TPU need to be further tested for now.\r\n\r\nBasically the score is created the first time when you call the `strategy` property of the `tf_training_args.py`. Be careful if you try to translate the PT script to TF there are quite a lot of differences to care about. The training arguments are one of them.\r\n\r\nNevertheless, thanks a lot for trying to make a language modeling with TF2 and will be happy to help in case you need some.",
"Thanks for your reply! @jplu \r\n\r\nSorry for that i'm not looking at the examples.In the examples, it initializes the model with trainng_args.strategy.scope(),and in my script, i initialize the model in TFTrainer.\\__init\\__ with the self.args.strategy.scope(). It looks the same.\r\n\r\nIf we initianize the model in trainng_args.strategy.scope(), it's be ok ! At first, I think we should pass the model_args to TFTrainer, not the model, and initialize the model in \\__init\\__. Therefore i think maybe we should improve the design. \r\n\r\nThanks again for your reply!\r\n"
] | 1,589 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
### Model I am using Bert.
### The problem arises when using:
I'm using run_language_modeling.py to fine-tune with TF 2.0.0/2.2.0. Of course, I modified the script.
I use TFAutoModelWithLMHead and TFTrainer from this repo to build my script.
When I'm training, here is the problem.
`ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x15b49bed0>), which is different from the scope used for the original variable (<tf.Variable 'tf_bert_for_masked_lm/bert/embeddings/word_embeddings/weight:0' shape=(21128, 768) dtype=float32, numpy=array(),dtype=float32)>). Make sure the slot variables are created under the same strategy scope. This may happen if you're restoring from a checkpoint outside the scope`
I moved the model initialization into TFTrainer().args.strategy.scope() and it works well!
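A sketch of that workaround (names such as `model_args`, `training_args` and `train_dataset` are assumed to exist as in the run_* example scripts):

```python
from transformers import TFAutoModelWithLMHead, TFTrainer

# Create the model inside the distribution strategy scope so the optimizer
# slot variables live under the same strategy as the model weights.
with training_args.strategy.scope():
    model = TFAutoModelWithLMHead.from_pretrained(model_args.model_name_or_path)

trainer = TFTrainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```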
### Something to say
- It seems that when using TF 2.0, there are some problems with TFTrainer's design. Will you improve this design so that one can use TF 2.0 more conveniently with this repo?
- Thanks for this repo and your contribution.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4456/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4455/comments | https://api.github.com/repos/huggingface/transformers/issues/4455/events | https://github.com/huggingface/transformers/issues/4455 | 620,892,040 | MDU6SXNzdWU2MjA4OTIwNDA= | 4,455 | get output from a particular layer of pre-trained transformer (xlnet) | {
"login": "mainulquraishi",
"id": 14335238,
"node_id": "MDQ6VXNlcjE0MzM1MjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/14335238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mainulquraishi",
"html_url": "https://github.com/mainulquraishi",
"followers_url": "https://api.github.com/users/mainulquraishi/followers",
"following_url": "https://api.github.com/users/mainulquraishi/following{/other_user}",
"gists_url": "https://api.github.com/users/mainulquraishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mainulquraishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mainulquraishi/subscriptions",
"organizations_url": "https://api.github.com/users/mainulquraishi/orgs",
"repos_url": "https://api.github.com/users/mainulquraishi/repos",
"events_url": "https://api.github.com/users/mainulquraishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mainulquraishi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sure, these are called hidden states! Here's the [documentation of the XLNet model](https://huggingface.co/transformers/model_doc/xlnet.html?highlight=output_hidden_states#transformers.XLNetModel).\r\n\r\nPlease note the third return:\r\n\r\n> **hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):**\r\n> \r\n> Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).\r\n> \r\n> Hidden-states of the model at the output of each layer plus the initial embedding outputs.\r\n"
] | 1,589 | 1,590 | 1,590 | NONE | null | As the title says, how can I do this with the PyTorch version of a pre-trained transformer? | {
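A sketch based on the documented return described in the comment above (output indexing may differ across versions; recent releases also expose `outputs.hidden_states`):

```python
from transformers import XLNetModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased", output_hidden_states=True)

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
outputs = model(input_ids)
hidden_states = outputs[-1]     # embedding output + one tensor per layer
fifth_layer = hidden_states[5]  # (batch, seq_len, hidden_size)
```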
"url": "https://api.github.com/repos/huggingface/transformers/issues/4455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4455/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4454/comments | https://api.github.com/repos/huggingface/transformers/issues/4454/events | https://github.com/huggingface/transformers/issues/4454 | 620,882,411 | MDU6SXNzdWU2MjA4ODI0MTE= | 4,454 | DMOZ - web page classification / multi-language | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can you check the Dmoz[Dmoz] \r\nthe database in sql he cost 1000$ to change it (https://idmoz.org) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | Hi,
Hope you are all well!
Still quite a newbie with transformers, I wanted to know how it could be possible to build a web page classifier from the DMOZ dump and classify pages into categories in several languages.
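One possible starting point (an editor's sketch, not an answer from the thread; `num_categories` is an assumed variable for the size of your label set):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=num_categories
)
inputs = tokenizer("Example page text fetched from a DMOZ URL", return_tensors="pt")
logits = model(**inputs)[0]  # scores over the DMOZ categories
```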
Thanks in advance for any insights or inputs on that question.
Cheers,
X | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4454/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4453/comments | https://api.github.com/repos/huggingface/transformers/issues/4453/events | https://github.com/huggingface/transformers/issues/4453 | 620,846,173 | MDU6SXNzdWU2MjA4NDYxNzM= | 4,453 | Bug - TFBertForSequenceClassification on SQUaD data | {
"login": "yonatanbitton",
"id": 26148975,
"node_id": "MDQ6VXNlcjI2MTQ4OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/26148975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonatanbitton",
"html_url": "https://github.com/yonatanbitton",
"followers_url": "https://api.github.com/users/yonatanbitton/followers",
"following_url": "https://api.github.com/users/yonatanbitton/following{/other_user}",
"gists_url": "https://api.github.com/users/yonatanbitton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonatanbitton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonatanbitton/subscriptions",
"organizations_url": "https://api.github.com/users/yonatanbitton/orgs",
"repos_url": "https://api.github.com/users/yonatanbitton/repos",
"events_url": "https://api.github.com/users/yonatanbitton/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonatanbitton/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nIf you want to train over SQuAD I suggest you to use the [run_tf_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_tf_squad.py) example that uses the TF Trainer or to check the following [Colab](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=kxZQ9Ms_vSV1) that uses the new `nlp` framework with a `.fit()` method.",
"Hey. \r\nDid you see my examples? \r\nAt \"Try 2\" I explained the problems using the new `nlp` framework with a `.fit()` method\r\nI need to use a custom dataset.\r\n\r\nRegarding `run_tf_squad.py`, I still have problems with it. \r\nI want to use the `fit` method. \r\nI'm using this code instead of the `VFTrainer` in the same file `run_tf_squad.py`.\r\nThis is the only change I made - same dataset, same examples, same features. Just trying to use `fit`. \r\n```python\r\nloss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)\r\n opt = tf.keras.optimizers.Adam(learning_rate=3e-5)\r\n\r\nmodel.compile(optimizer=opt,\r\n loss={'output_1': loss_fn, 'output_2': loss_fn},\r\n loss_weights={'output_1': 1., 'output_2': 1.},\r\n metrics=['accuracy'])\r\n\r\nhistory = model.fit(train_dataset, validation_data=eval_dataset, epochs=1)\r\n```\r\n\r\nAnd it's the same problem that occurs:\r\n```python\r\nTraceback (most recent call last):\r\n File \"/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/run_squad_tf.py\", line 257, in <module>\r\n main()\r\n File \"/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/run_squad_tf.py\", line 242, in main\r\n history = model.fit(train_dataset, validation_data=eval_dataset, epochs=1)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 819, in fit\r\n use_multiprocessing=use_multiprocessing)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py\", line 235, in fit\r\n use_multiprocessing=use_multiprocessing)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py\", line 593, in _process_training_inputs\r\n use_multiprocessing=use_multiprocessing)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py\", line 706, in _process_inputs\r\n use_multiprocessing=use_multiprocessing)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py\", line 702, in __init__\r\n x = standardize_function(x)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py\", line 660, in standardize_function\r\n standardize(dataset, extract_tensors_from_dataset=False)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 2360, in _standardize_user_data\r\n self._compile_from_inputs(all_inputs, y_input, x, y)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 2580, in _compile_from_inputs\r\n target, self.outputs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py\", line 1341, in cast_if_floating_dtype_and_mismatch\r\n if target.dtype != out.dtype:\r\nAttributeError: 'str' object has no attribute 'dtype'\r\n```\r\n\r\nI will add this to the post as a failing example - Try 4",
"Sorry, misunderstanding, what I meant is that I proposed you to check how the features are built, if you want to use `.fit()` the features have to be built differently than in `squad_convert_examples_to_features`, also you have to use TF 2.2. Otherwise if you want to use this method, you have to pass by the trainer.\r\n\r\nAlso why using `TFBertForSequenceClassification` instead of `TFBertForQuestionAnswering`?",
"Thank you for the answer. I prefare to use `fit`, you dont support it?\r\nAnyway, this is the status with the `VFTrainer`: \r\n\r\nI've used tensorflow 2.1.0 and I've now upgradeed to 2.2.0. \r\nI still have problems: \r\n\r\n```python\r\n trainer = TFTrainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=dev_dataset)\r\n print(f\"Created TFTrainer\")\r\n trainer.train()\r\n```\r\n\r\nIt does create the `TFTrainer`, but when getting to the `.train()` cmd it fails: \r\n```python\r\nCreated TFTrainer\r\nWARNING:tensorflow:From /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:364: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nrenamed to `run`\r\nWARNING:tensorflow:From /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nIf using Keras pass *_constraint arguments to layers.\r\n/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/indexed_slices.py:434: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.\r\n \"Converting sparse IndexedSlices to a dense Tensor of unknown shape. \"\r\nTraceback (most recent call last):\r\n File \"/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_squad_tf_with_trainer.py\", line 112, in <module>\r\n main()\r\n File \"/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_squad_tf_with_trainer.py\", line 34, in main\r\n trainer.train()\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py\", line 277, in train\r\n for training_loss in self._training_steps():\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py\", line 323, in _training_steps\r\n self._apply_gradients()\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 580, in __call__\r\n result = self._call(*args, **kwds)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 627, in _call\r\n self._initialize(args, kwds, add_initializers_to=initializers)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 506, in _initialize\r\n *args, **kwds))\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 2446, in _get_concrete_function_internal_garbage_collected\r\n graph_function, _, _ = self._maybe_define_function(args, kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 2777, in _maybe_define_function\r\n graph_function = self._create_graph_function(args, kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 2667, in _create_graph_function\r\n capture_by_value=self._capture_by_value),\r\n File 
\"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py\", line 981, in func_graph_from_py_func\r\n func_outputs = python_func(*func_args, **func_kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 441, in wrapped_fn\r\n return weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 3299, in bound_method_wrapper\r\n return wrapped_fn(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py\", line 968, in wrapper\r\n raise e.ag_error_metadata.to_exception(e)\r\nValueError: in user code:\r\n\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:329 _apply_gradients *\r\n self.args.strategy.experimental_run_v2(self._step)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:343 _step *\r\n self.optimizer.apply_gradients(list(zip(gradients, vars)))\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/optimization_tf.py:135 apply_gradients *\r\n return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name,)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:478 apply_gradients **\r\n self._create_all_weights(var_list)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:663 _create_all_weights\r\n self._create_slots(var_list)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/adam.py:156 _create_slots\r\n self.add_slot(var, 'm')\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:716 add_slot\r\n .format(strategy, var))\r\n\r\n ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7f2d141aec50>), which is different from the scope used for the original variable (<tf.Variable 'tf_bert_for_question_answering/bert/embeddings/word_embeddings/weight:0' shape=(28996, 768) dtype=float32, numpy=\r\n array([[-0.00054784, -0.04156886, 0.01308366, ..., -0.0038919 ,\r\n -0.0335485 , 0.0149841 ],\r\n [ 0.01688265, -0.03106827, 0.0042053 , ..., -0.01474032,\r\n -0.03561099, -0.0036223 ],\r\n [-0.00057234, -0.02673604, 0.00803954, ..., -0.01002474,\r\n -0.0331164 , -0.01651673],\r\n ...,\r\n [-0.00643814, 0.01658491, -0.02035619, ..., -0.04178825,\r\n -0.049201 , 0.00416085],\r\n [-0.00483562, -0.00267701, -0.02901638, ..., -0.05116647,\r\n 0.00449265, -0.01177113],\r\n [ 0.03134822, -0.02974372, -0.02302896, ..., -0.01454749,\r\n -0.05249038, 0.02843569]], dtype=float32)>). Make sure the slot variables are created under the same strategy scope. This may happen if you're restoring from a checkpoint outside the scope\r\n```\r\nThank you",
"This error means that you haven't created the model in the proper scope. Did you use the scope created in the TrainerArgs?\r\n\r\nWhat gives you the following command line without touching to the initial code:\r\n```\r\npython examples/question-answering/run_tf_squad.py \\\r\n --model_name_or_path bert-base-uncased \\\r\n --output_dir model \\\r\n --max-seq-length 384 \\\r\n --num_train_epochs 2 \\\r\n --per_gpu_train_batch_size 8 \\\r\n --per_gpu_eval_batch_size 16 \\\r\n --do_train \\\r\n --logging_dir logs \\\r\n --mode question-answering \\\r\n --logging_steps 10 \\\r\n --learning_rate 3e-5 \\\r\n --doc_stride 128 \\\r\n --optimizer_name adamw\r\n```",
"That code works, **but** I need one extra thing: evaluation/prediction on test dataset, and it doesn't work for me. \r\n\r\nI took the `run_tf_squad.py` and added simple changes: \r\n```python\r\ntest_examples = processor.get_dev_examples(data_args.data_dir, filename='test-v1.1.json')\r\ntest_dataset = (\r\n squad_convert_examples_to_features(\r\n examples=test_examples,\r\n tokenizer=tokenizer,\r\n max_seq_length=data_args.max_seq_length,\r\n doc_stride=data_args.doc_stride,\r\n max_query_length=data_args.max_query_length,\r\n is_training=False,\r\n return_dataset=\"tf\",\r\n )\r\n )\r\n```\r\n\r\nThat is, only adding the test dataset.\r\nNow I want to evalute my final model on it. I tried with both predict and evaluate and it doesn't work. \r\n\r\nTry 1 - \r\n```python\r\nresults = trainer.evaluate(test_dataset)\r\n```\r\nTrace:\r\n```python\r\n05/24/2020 10:55:39 - INFO - transformers.trainer_tf - ***** Running Evaluation *****\r\n05/24/2020 10:55:39 - INFO - transformers.trainer_tf - Batch size = 16\r\nTraceback (most recent call last):\r\n File \"/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_run_squad_tf_with_trainer.py\", line 208, in <module>\r\n main()\r\n File \"/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_run_squad_tf_with_trainer.py\", line 203, in main\r\n # results = trainer.evaluate(test_dataset)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py\", line 246, in evaluate\r\n output = self._prediction_loop(eval_dataset, description=\"Evaluation\")\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py\", line 195, in _prediction_loop\r\n loss, logits = self._evaluate_steps(features, labels)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 580, in __call__\r\n result = self._call(*args, **kwds)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 627, in _call\r\n self._initialize(args, kwds, add_initializers_to=initializers)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 506, in _initialize\r\n *args, **kwds))\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 2446, in _get_concrete_function_internal_garbage_collected\r\n graph_function, _, _ = self._maybe_define_function(args, kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 2777, in _maybe_define_function\r\n graph_function = self._create_graph_function(args, kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 2667, in _create_graph_function\r\n capture_by_value=self._capture_by_value),\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py\", line 981, in func_graph_from_py_func\r\n func_outputs = python_func(*func_args, **func_kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 441, in wrapped_fn\r\n return weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n File 
\"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 3299, in bound_method_wrapper\r\n return wrapped_fn(*args, **kwargs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py\", line 968, in wrapper\r\n raise e.ag_error_metadata.to_exception(e)\r\nValueError: in user code:\r\n\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:171 _evaluate_steps *\r\n per_replica_loss, per_replica_logits = self.args.strategy.experimental_run_v2(\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py:400 _run_model *\r\n logits = self.model(features, training=training)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/modeling_tf_bert.py:1163 call *\r\n outputs = self.bert(inputs, **kwargs)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/modeling_tf_bert.py:548 call *\r\n extended_attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :]\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:984 _slice_helper\r\n name=name)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:1150 strided_slice\r\n shrink_axis_mask=shrink_axis_mask)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py:10179 strided_slice\r\n shrink_axis_mask=shrink_axis_mask, name=name)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper\r\n attrs=attr_protos, op_def=op_def)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:595 _create_op_internal\r\n compute_device)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:3327 _create_op_internal\r\n op_def=op_def)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:1817 __init__\r\n control_input_ops, op_def)\r\n /home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:1657 _create_c_op\r\n raise ValueError(str(e))\r\n\r\n ValueError: Index out of range using input dim 1; input has only 1 dims for '{{node tf_bert_for_question_answering/bert/strided_slice}} = StridedSlice[Index=DT_INT32, T=DT_INT32, begin_mask=9, ellipsis_mask=0, end_mask=9, new_axis_mask=6, shrink_axis_mask=0](per_replica_features, tf_bert_for_question_answering/bert/strided_slice/stack, tf_bert_for_question_answering/bert/strided_slice/stack_1, tf_bert_for_question_answering/bert/strided_slice/stack_2)' with input shapes: [128], [4], [4], [4] and with computed input tensors: input[3] = <1 1 1 1>.\r\n```\r\n\r\nTry 2: \r\n```python\r\npredictions = trainer.predict(test_dataset)\r\n```\r\nTrace:\r\n```python\r\n05/24/2020 11:06:50 - INFO - transformers.trainer_tf - ***** Running Prediction *****\r\n05/24/2020 11:06:50 - INFO - transformers.trainer_tf - Batch size = 16\r\nTraceback (most recent call last):\r\n File \"/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_run_squad_tf_with_trainer.py\", line 208, in <module>\r\n main()\r\n File 
\"/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/my_run_squad_tf_with_trainer.py\", line 201, in main\r\n predictions = trainer.predict(test_dataset)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py\", line 430, in predict\r\n return self._prediction_loop(test_dataset, description=\"Prediction\")\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/trainer_tf.py\", line 213, in _prediction_loop\r\n preds = logits.numpy()\r\nAttributeError: 'tuple' object has no attribute 'numpy'\r\n\r\n```",
"That's normal, the evaluation/prediction are not implemented yet. I have to make the example compliant with the SQuAD metric from the `nlp` framework. It means that for now only training is possible.\r\n\r\nBut if you want to make this integration yourself and do a PR, you are very welcome to do it :) Otherwise I think to be able to do it in the next 2 coming weeks. Really sorry for that.",
"Thank you for the answers. \r\n\r\nThat's why I tried to use the normal tensorflow `fit` and `predict` methods as shown here https://blog.tensorflow.org/2019/11/hugging-face-state-of-art-natural.html.\r\n\r\nBasically I just want to do training and evaluation during training, and then testing on the test dataset.\r\nI succeed to do it with the pytorch model (`run_squad.py`), and I now tried to do it with the tensorflow model as well. If it will be implemented in the future it will be great, I will wait.\r\n\r\nThanks :) 👍 ",
"I very quickly coded this so it is not really tested but it can gives you an idea of how to use `.fit()` method. It is based on the Colab version proposed for the `nlp` framework.\r\n\r\n```python\r\nfrom transformers import (\r\n BertTokenizerFast,\r\n TFBertForQuestionAnswering,\r\n)\r\nimport tensorflow_datasets as tfds\r\nimport tensorflow as tf\r\n\r\nds = tfds.load(\"squad\")\r\n\r\ntokenizer = BertTokenizerFast.from_pretrained(\"bert-base-cased\")\r\nmodel = TFBertForQuestionAnswering.from_pretrained(\"bert-base-cased\")\r\n\r\ndef get_correct_alignement(context, gold_text, start_idx):\r\n end_idx = start_idx + len(gold_text)\r\n if context[start_idx:end_idx] == gold_text:\r\n return start_idx, end_idx # When the gold label position is good\r\n elif context[start_idx-1:end_idx-1] == gold_text:\r\n return start_idx-1, end_idx-1 # When the gold label is off by one character\r\n elif context[start_idx-2:end_idx-2] == gold_text:\r\n return start_idx-2, end_idx-2 # When the gold label is off by two character\r\n else:\r\n raise ValueError()\r\n\r\ndef convert_to_tf_features(example, training=True):\r\n encodings = tokenizer.encode_plus(example[\"context\"].numpy().decode(\"utf-8\"), example[\"question\"].numpy().decode(\"utf-8\"), pad_to_max_length=True, max_length=512)\r\n start_positions, end_positions = [], []\r\n \r\n if training:\r\n start_idx, end_idx = get_correct_alignement(example[\"context\"].numpy().decode(\"utf-8\"), example[\"answers\"][\"text\"][0].numpy().decode(\"utf-8\"), example[\"answers\"][\"answer_start\"][0].numpy())\r\n start = encodings.char_to_token(0, start_idx)\r\n end = encodings.char_to_token(0, end_idx-1)\r\n \r\n if start is None or end is None:\r\n return None, None\r\n \r\n start_positions.append(start)\r\n end_positions.append(end)\r\n else:\r\n for i, start, text in enumerate(zip(example[\"answers\"][\"answer_start\"], example[\"answers\"][\"text\"])):\r\n start_idx, end_idx = get_correct_alignement(example[\"context\"].numpy().decode(\"utf-8\"), example[\"context\"].numpy().decode(\"utf-8\"), text.numpy().decode(\"utf-8\"), start.numpy())\r\n \r\n start = encodings.char_to_token(0, start_idx)\r\n end = encodings.char_to_token(0, end_idx-1)\r\n \r\n if start is None or end is None:\r\n return None, None\r\n \r\n start_positions.append(start)\r\n end_positions.append(end)\r\n \r\n if start_positions and end_positions:\r\n encodings.update({'output_1': start_positions,\r\n 'output_2': end_positions})\r\n \r\n return encodings, {'output_1': start_positions, 'output_2': end_positions}\r\n\r\ntrain_features = {}\r\ntrain_labels = {}\r\nfor item in ds[\"train\"]:\r\n feature, label = convert_to_tf_features(item)\r\n if feature is not None and label is not None:\r\n for k, v in feature.items():\r\n train_features.setdefault(k, []).append(v)\r\n for k, v in label.items():\r\n train_labels.setdefault(k, []).append(v)\r\n\r\ntrain_tfdataset = tf.data.Dataset.from_tensor_slices((train_features, train_labels)).batch(8)\r\n\r\nloss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)\r\nopt = tf.keras.optimizers.Adam(learning_rate=3e-5)\r\nmodel.compile(optimizer=opt,\r\n loss={'output_1': loss_fn, 'output_2': loss_fn},\r\n loss_weights={'output_1': 1., 'output_2': 1.},\r\n metrics=['accuracy'])\r\n\r\nmodel.fit(train_tfdataset, epochs=1, steps_per_epoch=3)\r\n```",
"Thank for the help :) \r\n\r\nI've succeded to use your code as reference with my dataset, converting examples to features: \r\n\r\n```python\r\ndef get_tf_dataset(args, processor, tokenizer, dataset_type):\r\n filename_by_case = {'train': args.train_file, 'dev': args.dev_file, 'test': args.test_file}\r\n func_by_case = {'train': processor.get_train_examples, 'dev': processor.get_dev_examples, 'test': processor.get_dev_examples}\r\n examples = func_by_case[dataset_type](args.data_dir, filename=filename_by_case[dataset_type])\r\n\r\n train_features = {}\r\n train_labels = {}\r\n for item in examples:\r\n feature, label = convert_to_tf_features(item, tokenizer)\r\n if feature is not None and label is not None:\r\n for k, v in feature.items():\r\n train_features.setdefault(k, []).append(v)\r\n for k, v in label.items():\r\n train_labels.setdefault(k, []).append(v)\r\n\r\n tfdataset = tf.data.Dataset.from_tensor_slices((train_features, train_labels)).batch(8)\r\n return tfdataset\r\n\r\n\r\ndef convert_to_tf_features(example, tokenizer, training=True):\r\n context = example.context_text # example[\"context\"].numpy().decode(\"utf-8\")\r\n question = example.question_text # example[\"question\"].numpy().decode(\"utf-8\")\r\n encodings = tokenizer.encode_plus(context, question, pad_to_max_length=True, max_length=512)\r\n start_positions, end_positions = [], []\r\n\r\n first_answer = example.answers[0] if len(example.answers) > 0 else \"\" # example[\"answers\"][\"text\"][0].numpy().decode(\"utf-8\")\r\n first_answer_start = example.start_position # example[\"answers\"][\"answer_start\"][0].numpy()\r\n start_idx, end_idx = get_correct_alignement(context,\r\n first_answer,\r\n first_answer_start)\r\n start = encodings.char_to_token(0, start_idx)\r\n end = encodings.char_to_token(0, end_idx - 1) if end_idx > 0 else 0\r\n\r\n if start is None or end is None:\r\n return None, None\r\n\r\n start_positions.append(start)\r\n end_positions.append(end)\r\n\r\n if start_positions and end_positions:\r\n encodings.update({'output_1': start_positions,\r\n 'output_2': end_positions})\r\n\r\n return encodings, {'output_1': start_positions, 'output_2': end_positions}\r\n```\r\n\r\nI will check how to deal with the impossible answers by another references. In this example its empty string \"\" when no answer and `end_position = 0`. Thanks.",
"Hi, how did you solve the **Try 1** problem? \r\nAttributeError: 'NoneType' object has no attribute 'strip'"
] | 1,589 | 1,592 | 1,590 | NONE | null | # 🐛 Bug
## Information
I'm using `TFBertForSequenceClassification` on SQuAD v1 data.
The problem arises when using:
* [ ] Both official example scripts and my own modified scripts
The task I am working on is:
* [ ] the official SQuAD v1 data and my own SQuAD v1 data.
## To reproduce
### Try 1 - with the official SQuAD via `tensorflow_datasets.load("squad")`, trying to mimic the following official reference -
https://github.com/huggingface/transformers#quick-tour-tf-20-training-and-pytorch-interoperability
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification, BertTokenizer, \
squad_convert_examples_to_features, SquadV1Processor
import tensorflow_datasets
model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
data = tensorflow_datasets.load("squad")
processor = SquadV1Processor()
examples = processor.get_examples_from_dataset(data, evaluate=False)
dataset_features = squad_convert_examples_to_features(examples=examples, tokenizer=tokenizer, max_seq_length=384, doc_stride=128, max_query_length=64, is_training=True, return_dataset='tf')
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'start_position': loss_fn, 'end_position': loss_fn},
loss_weights={'start_position': 1., 'end_position': 1.},
metrics=['accuracy'])
model.fit(dataset_features, epochs=3)
```
**Stacktrace** - the failure occurs in `squad_convert_examples_to_features`:
```python
convert squad examples to features: 0%| | 0/10570 [00:00<?, ?it/s]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 95, in squad_convert_example_to_features
cleaned_answer_text = " ".join(whitespace_tokenize(example.answer_text))
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/tokenization_bert.py", line 112, in whitespace_tokenize
text = text.strip()
AttributeError: 'NoneType' object has no attribute 'strip'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/huggingface_tf_example_squad.py", line 18, in <module>
dataset_features = squad_convert_examples_to_features(examples=examples, tokenizer=tokenizer, max_seq_length=384, doc_stride=128, max_query_length=64, is_training=True, return_dataset='tf')
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 327, in squad_convert_examples_to_features
disable=not tqdm_enabled,
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/multiprocessing/pool.py", line 320, in <genexpr>
return (item for chunk in result for item in chunk)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/multiprocessing/pool.py", line 735, in next
raise value
AttributeError: 'NoneType' object has no attribute 'strip'
```
### Try 2 - reading data from file, trying to mimic the following official reference - https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification, BertTokenizer, \
squad_convert_examples_to_features, SquadV1Processor
import tensorflow_datasets
model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
data = tensorflow_datasets.load("squad", data_dir='/data/users/yonatab/zero_shot_data/datasets_refs')
processor = SquadV1Processor()
examples = processor.get_examples_from_dataset(data, evaluate=True)
dataset_features = squad_convert_examples_to_features(examples=examples, tokenizer=tokenizer, max_seq_length=384, doc_stride=128, max_query_length=64, is_training=True, return_dataset='tf')
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'start_position': loss_fn, 'end_position': loss_fn},
loss_weights={'start_position': 1., 'end_position': 1.},
metrics=['accuracy'])
model.fit(dataset_features, epochs=3)
```
**Stacktrace** - the failure occurs in the `fit` method:
```python
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/minimal_example_for_git.py", line 97, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/minimal_example_for_git.py", line 69, in main
history = model.fit(tfdataset, epochs=1, steps_per_epoch=3)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 706, in _process_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 702, in __init__
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```
### Try 3
```python
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
processor = SquadV1Processor()
examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
dataset_features = squad_convert_examples_to_features(examples=examples, tokenizer=tokenizer, max_seq_length=384,
doc_stride=128, max_query_length=64, is_training=True,
return_dataset='tf')
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'start_position': loss_fn, 'end_position': loss_fn},
loss_weights={'start_position': 1., 'end_position': 1.},
metrics=['accuracy'])
history = model.fit(dataset_features, epochs=1)
```
**Stacktrace** - the failure occurs in the `fit` method:
```python
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/reading_from_file.py", line 39, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/examples_git/reading_from_file.py", line 32, in main
history = model.fit(dataset_features, epochs=1)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 706, in _process_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 702, in __init__
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```
### Try 4 (after the first comment)
I'm using the code of `run_tf_squad.py`, but instead of the `TFTrainer` I'm trying to use `fit`.
This is the only change I made - same dataset, same examples, same features. Just trying to use `fit`.
```python
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'output_1': loss_fn, 'output_2': loss_fn},
loss_weights={'output_1': 1., 'output_2': 1.},
metrics=['accuracy'])
history = model.fit(train_dataset, validation_data=eval_dataset, epochs=1)
```
And the same problem occurs:
```python
Traceback (most recent call last):
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/run_squad_tf.py", line 257, in <module>
main()
File "/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/run_squad_tf.py", line 242, in main
history = model.fit(train_dataset, validation_data=eval_dataset, epochs=1)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 706, in _process_inputs
use_multiprocessing=use_multiprocessing)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 702, in __init__
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```
## Expected behavior
I want to be able to use `fit` on my own SQuAD data.
## Environment info
- `transformers` version: 2.9.1
- Platform: Linux
- Python version: 3.6.6
- PyTorch version (GPU?): - Using tensorflow
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Edit:
Keras has a new tutorial for it:
https://keras.io/examples/nlp/text_extraction_with_bert/
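Aside, a hedged sketch (not the official fix) based on the discussion in the comments: the Keras-named outputs of the TF question-answering head are `output_1`/`output_2` (start and end logits), not `start_position`/`end_position`, so compiling with those keys avoids the loss-key mismatch in the tries above:
```python
import tensorflow as tf
from transformers import TFBertForQuestionAnswering

model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE, from_logits=True
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    # the QA head returns (start_logits, end_logits); Keras names these
    # tuple outputs output_1 and output_2
    loss={"output_1": loss_fn, "output_2": loss_fn},
    loss_weights={"output_1": 1.0, "output_2": 1.0},
)
```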
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4453/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4452/comments | https://api.github.com/repos/huggingface/transformers/issues/4452/events | https://github.com/huggingface/transformers/issues/4452 | 620,751,246 | MDU6SXNzdWU2MjA3NTEyNDY= | 4,452 | Value matrix of self-attention | {
"login": "jmamou",
"id": 19263306,
"node_id": "MDQ6VXNlcjE5MjYzMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19263306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmamou",
"html_url": "https://github.com/jmamou",
"followers_url": "https://api.github.com/users/jmamou/followers",
"following_url": "https://api.github.com/users/jmamou/following{/other_user}",
"gists_url": "https://api.github.com/users/jmamou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmamou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmamou/subscriptions",
"organizations_url": "https://api.github.com/users/jmamou/orgs",
"repos_url": "https://api.github.com/users/jmamou/repos",
"events_url": "https://api.github.com/users/jmamou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmamou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4452/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/4451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4451/comments | https://api.github.com/repos/huggingface/transformers/issues/4451/events | https://github.com/huggingface/transformers/issues/4451 | 620,692,195 | MDU6SXNzdWU2MjA2OTIxOTU= | 4,451 | ❓ Warning : This overload of addcdiv_ is deprecated | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not expected, but shouldn't be an issue. Feel free to open a PR swapping args in https://github.com/huggingface/transformers/blob/31eedff5a0fc47d60609089627af6698c21da88d/src/transformers/optimization.py#L165"
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | # ❓ Questions & Help
When running the [official Colab example of GLUE](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb), I receive a `UserWarning` during training:
```
/pytorch/torch/csrc/utils/python_arg_parser.cpp:756: UserWarning: This overload of addcdiv_ is deprecated:
addcdiv_(Number value, Tensor tensor1, Tensor tensor2)
Consider using one of the following signatures instead:
addcdiv_(Tensor tensor1, Tensor tensor2, *, Number value)
```
---
**Is this expected?**
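For reference, a self-contained sketch of the argument swap the warning asks for (illustrative; not the exact `optimization.py` call site):
```python
import torch

p = torch.zeros(3)
exp_avg = torch.ones(3)
denom = torch.full((3,), 2.0)
step_size = 0.1

# deprecated overload (triggers the warning):
#   p.addcdiv_(-step_size, exp_avg, denom)
# preferred overload, with the scalar passed as a keyword:
p.addcdiv_(exp_avg, denom, value=-step_size)  # p -= step_size * exp_avg / denom
```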
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4451/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4450/comments | https://api.github.com/repos/huggingface/transformers/issues/4450/events | https://github.com/huggingface/transformers/pull/4450 | 620,641,805 | MDExOlB1bGxSZXF1ZXN0NDE5ODQ4OTM5 | 4,450 | [Trainer] move model to device before setting optimizer | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | Fixes #4240
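For context, a minimal sketch of the ordering this PR enforces (variable names are illustrative, not the actual `Trainer` code):
```python
import torch

model = torch.nn.Linear(4, 2)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model.to(device)  # move the parameters to the target device first...
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # ...then build the optimizer over them
```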
Thanks @shaoyent for diagnosing the issue | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4450/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4450",
"html_url": "https://github.com/huggingface/transformers/pull/4450",
"diff_url": "https://github.com/huggingface/transformers/pull/4450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4450.patch",
"merged_at": 1589858013000
} |
https://api.github.com/repos/huggingface/transformers/issues/4449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4449/comments | https://api.github.com/repos/huggingface/transformers/issues/4449/events | https://github.com/huggingface/transformers/issues/4449 | 620,639,227 | MDU6SXNzdWU2MjA2MzkyMjc= | 4,449 | [Questions & Help] The loss doesn't decrease correctly while training BERT from scratch | {
"login": "ghrua",
"id": 16100433,
"node_id": "MDQ6VXNlcjE2MTAwNDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/16100433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghrua",
"html_url": "https://github.com/ghrua",
"followers_url": "https://api.github.com/users/ghrua/followers",
"following_url": "https://api.github.com/users/ghrua/following{/other_user}",
"gists_url": "https://api.github.com/users/ghrua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghrua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghrua/subscriptions",
"organizations_url": "https://api.github.com/users/ghrua/orgs",
"repos_url": "https://api.github.com/users/ghrua/repos",
"events_url": "https://api.github.com/users/ghrua/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghrua/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
## Details
I am now using [huggingface/transformers](https://github.com/huggingface/transformers) to train a BERT model **from scratch** on **1M** lines of wiki data, but the training loss behaves very strangely. Before showing the details of the training process, I will first share the scripts and configs I used:
```
python run_language_modeling.py --output_dir $OUTPUT_DIR \
--model_type bert \
--mlm \
--config_name $CONFIG_AND_DATA_DIR \
--tokenizer_name $CONFIG_AND_DATA_DIR \
--do_train \
--do_eval \
--num_train_epochs 20 \
--learning_rate 1e-4 \
--save_steps 250 \
--per_gpu_train_batch_size 64 \
--evaluate_during_training \
--seed 404 \
--block_size 256 \
--train_data_file $DATA_DIR/train.txt \
--eval_data_file $DATA_DIR/valid.txt \
--evaluate_during_training \
--logging_steps 250 > log.bert
```
where [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) is the Python script provided by Hugging Face. I didn't change the BERT config, except for the vocabulary size. The vocabulary (i.e. the tokenizer) was trained using [huggingface/tokenizers](https://github.com/huggingface/tokenizers), roughly as sketched below.
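A hedged sketch of how such a vocabulary is typically produced (the exact save API differs across `tokenizers` versions):
```python
from tokenizers import BertWordPieceTokenizer

# train a WordPiece vocab on the raw training text; vocab_size and
# min_frequency here are illustrative choices
tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(files=["train.txt"], vocab_size=30522, min_frequency=2)
# newer tokenizers versions expose save_model(); older ones use save(directory, name)
tokenizer.save_model(".")
```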
I put the log of the training loss [here](https://gist.github.com/ghrua/01fd859707923f80f1e16af5c2bd3f6a). We can see that, after 20 epochs, the training loss only decreases from `7.84` to `7.60`. That is very strange: with 1 million lines of raw data, the loss should drop sharply. Note that I also used the same Python script to train a GPT-2 from scratch on the same data, and it worked very well; the loss decreased as expected.
I have tried several ways to address this issue:
1. Set the batch size as big as the GPU can afford. Since BERT only predicts 15% of the tokens at each step, a bigger batch size may give the model more error signal during training. However, it didn't help.
2. Maybe the learning rate is too small. I tried adjusting the `learning_rate` to 5e-4; unfortunately, the converged loss became even worse.
3. Maybe the `vocab.txt`, which was extracted from my training data using [huggingface/tokenizers](https://github.com/huggingface/tokenizers), has some problem. So I instead used the `vocab.txt` and `config.json` downloaded from this repo to run the script, but I hit the same problem.
4. I also ran the same script on 2k lines of data, training the BERT model for 200 epochs; the converged training loss was around `6.8`, so the model could not even overfit a toy dataset (a minimal overfit check is sketched below).
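For the record, a self-contained overfit sanity check of the kind I mean (hedged: the label kwarg is `masked_lm_labels` in transformers 2.x, `labels` in newer versions):
```python
import torch
from transformers import BertConfig, BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
config = BertConfig(vocab_size=tokenizer.vocab_size, num_hidden_layers=2)
model = BertForMaskedLM(config)
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

# a tiny fixed batch: a healthy setup should drive the loss near zero here
enc = tokenizer.batch_encode_plus(
    ["a tiny fixed sentence ."] * 8, pad_to_max_length=True, return_tensors="pt"
)
for _ in range(100):
    loss = model(input_ids=enc["input_ids"], masked_lm_labels=enc["input_ids"])[0]
    optim.zero_grad()
    loss.backward()
    optim.step()
print(loss.item())  # should end up far below the ~7.6 reported above
```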
Thanks for your kind help!!!
**A link to original question on Stack Overflow**:
[Link to the question asked on SO](https://stackoverflow.com/questions/61873435/the-loss-doesnt-decrease-correctly-while-training-bert-from-scratch)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4449/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4448/comments | https://api.github.com/repos/huggingface/transformers/issues/4448/events | https://github.com/huggingface/transformers/pull/4448 | 620,634,592 | MDExOlB1bGxSZXF1ZXN0NDE5ODQzMjc2 | 4,448 | Correct TF formatting to exclude LayerNorms from weight decay | {
"login": "oliverastrand",
"id": 24825393,
"node_id": "MDQ6VXNlcjI0ODI1Mzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/24825393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverastrand",
"html_url": "https://github.com/oliverastrand",
"followers_url": "https://api.github.com/users/oliverastrand/followers",
"following_url": "https://api.github.com/users/oliverastrand/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverastrand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverastrand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverastrand/subscriptions",
"organizations_url": "https://api.github.com/users/oliverastrand/orgs",
"repos_url": "https://api.github.com/users/oliverastrand/repos",
"events_url": "https://api.github.com/users/oliverastrand/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverastrand/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nThanks for the fix! Just one suggestion above :)",
"Thanks for the feedback! Sounds like a good idea, added that. Are the failing ci tests an issue? Everything passes on my machine. ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=h1) Report\n> Merging [#4448](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2d184cb553ee20943b03b253f44300e466357871&el=desc) will **increase** coverage by `0.84%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4448 +/- ##\n==========================================\n+ Coverage 77.30% 78.14% +0.84% \n==========================================\n Files 120 120 \n Lines 20027 20027 \n==========================================\n+ Hits 15481 15651 +170 \n+ Misses 4546 4376 -170 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `79.24% <ø> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.08% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.25% <0.00%> (+1.10%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <0.00%> (+2.62%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=footer). Last update [2d184cb...d99d65c](https://codecov.io/gh/huggingface/transformers/pull/4448?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Awesome! LGTM :)\r\n\r\n/cc @julien-c and @LysandreJik "
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | Fixes #4360
Layer Norm is formatted in the wrong way for TensorFlow. This causes it not to be excluded from weight decay in the [run_tf_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py) script.
This PR simply formats the string to match the TensorFlow naming.
Not sure if you want a test for this; one option would be to check that every element in `_exclude_from_weight_decay` triggers a regexp match, but that seems a bit overkill. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4448/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4448",
"html_url": "https://github.com/huggingface/transformers/pull/4448",
"diff_url": "https://github.com/huggingface/transformers/pull/4448.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4448.patch",
"merged_at": 1590007560000
} |
https://api.github.com/repos/huggingface/transformers/issues/4447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4447/comments | https://api.github.com/repos/huggingface/transformers/issues/4447/events | https://github.com/huggingface/transformers/issues/4447 | 620,560,911 | MDU6SXNzdWU2MjA1NjA5MTE= | 4,447 | TF Beam Search generation seems to be flaky sometimes | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,589 | 1,591 | 1,591 | MEMBER | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): ALL TF generate models
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
* [ ] all generate beam search tests in TF
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Some commits are failing due to `Beam size should alway be full` in CircleCI - this should actually never happen. See a failed CircleCI run here: https://circleci.com/gh/huggingface/transformers/39780?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link.
## Expected behavior
CircleCI should not fail with this message. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4447/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4446/comments | https://api.github.com/repos/huggingface/transformers/issues/4446/events | https://github.com/huggingface/transformers/pull/4446 | 620,559,468 | MDExOlB1bGxSZXF1ZXN0NDE5Nzg2MDA2 | 4,446 | Make get_last_lr in trainer backward compatible | {
"login": "rakeshchada",
"id": 2664691,
"node_id": "MDQ6VXNlcjI2NjQ2OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2664691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rakeshchada",
"html_url": "https://github.com/rakeshchada",
"followers_url": "https://api.github.com/users/rakeshchada/followers",
"following_url": "https://api.github.com/users/rakeshchada/following{/other_user}",
"gists_url": "https://api.github.com/users/rakeshchada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rakeshchada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rakeshchada/subscriptions",
"organizations_url": "https://api.github.com/users/rakeshchada/orgs",
"repos_url": "https://api.github.com/users/rakeshchada/repos",
"events_url": "https://api.github.com/users/rakeshchada/events{/privacy}",
"received_events_url": "https://api.github.com/users/rakeshchada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=h1) Report\n> Merging [#4446](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42e8fbfc51ae4990b24a3c92fa0c5d3481dfc821&el=desc) will **increase** coverage by `0.85%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4446 +/- ##\n==========================================\n+ Coverage 77.16% 78.02% +0.85% \n==========================================\n Files 120 120 \n Lines 20087 20088 +1 \n==========================================\n+ Hits 15501 15673 +172 \n+ Misses 4586 4415 -171 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.60% <50.00%> (+1.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.51% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <0.00%> (+2.62%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=footer). Last update [42e8fbf...740126d](https://codecov.io/gh/huggingface/transformers/pull/4446?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Fixes https://github.com/huggingface/transformers/issues/3959.
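For context, the gist of the change is a torch version check along these lines (my own illustration of the approach, not the actual diff; the toy scheduler setup is only there to make the snippet self-contained):
```python
import torch
from packaging import version

# Toy optimizer/scheduler so the snippet runs on its own.
model = torch.nn.Linear(2, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda step: 1.0)

# torch >= 1.4 introduced scheduler.get_last_lr(); older versions only have get_lr().
if version.parse(torch.__version__) >= version.parse("1.4"):
    last_lr = scheduler.get_last_lr()[0]
else:
    last_lr = scheduler.get_lr()[0]
```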
@julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4446/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4446",
"html_url": "https://github.com/huggingface/transformers/pull/4446",
"diff_url": "https://github.com/huggingface/transformers/pull/4446.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4446.patch",
"merged_at": 1589847456000
} |
https://api.github.com/repos/huggingface/transformers/issues/4445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4445/comments | https://api.github.com/repos/huggingface/transformers/issues/4445/events | https://github.com/huggingface/transformers/issues/4445 | 620,435,716 | MDU6SXNzdWU2MjA0MzU3MTY= | 4,445 | Generation with EncoderDecoder Model | {
"login": "manzar96",
"id": 38495091,
"node_id": "MDQ6VXNlcjM4NDk1MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/38495091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manzar96",
"html_url": "https://github.com/manzar96",
"followers_url": "https://api.github.com/users/manzar96/followers",
"following_url": "https://api.github.com/users/manzar96/following{/other_user}",
"gists_url": "https://api.github.com/users/manzar96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manzar96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manzar96/subscriptions",
"organizations_url": "https://api.github.com/users/manzar96/orgs",
"repos_url": "https://api.github.com/users/manzar96/repos",
"events_url": "https://api.github.com/users/manzar96/events{/privacy}",
"received_events_url": "https://api.github.com/users/manzar96/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Do we have any updates on this issue?",
"I will take a look at this at the end of next week - will get to you! ",
"> I will take a look at this at the end of next week - will get to you!\r\n\r\nThanks a lot!",
"Hi @manzar96,\r\nMultiple bugs were fixed in #4680 . Can you please take a look whether this error persists?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
Hi,
I am using the EncoderDecoderModel, and I would like to use the generate method for sequence generation. As I have read in the docs, the generate method can be used with any pretrained HF model with an LM head on top.
So wrapping a pretrained LM model (e.g. GPT2LMHeadModel, BertForMaskedLM) together with an encoder in the Encoder-Decoder class gives a null bos_token_id, and the generate method does not work properly. However, using only a pretrained LM model (without wrapping it in the Encoder-Decoder class) gives a valid bos_token_id (because its config file contains bos_token_id).
How should I handle the above issue?
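For reference, here is a minimal sketch of what I am trying (the checkpoints and the manually set token ids are my own guesses, not something the docs prescribe):
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Tie a BERT encoder to a BERT decoder; the combined config ends up
# without a bos_token_id, which is what breaks generate for me.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Workaround I am experimenting with: set the special token ids by hand,
# reusing BERT's [CLS]/[SEP]/[PAD] tokens as stand-ins.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id

input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt")
generated = model.generate(input_ids, decoder_start_token_id=tokenizer.cls_token_id)
```
I am not sure whether manually setting the ids like this is the intended approach.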
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4445/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4444/comments | https://api.github.com/repos/huggingface/transformers/issues/4444/events | https://github.com/huggingface/transformers/issues/4444 | 620,400,246 | MDU6SXNzdWU2MjA0MDAyNDY= | 4,444 | model.save() does not save Keras model that includes DistilBERT layer | {
"login": "msahamed",
"id": 8838524,
"node_id": "MDQ6VXNlcjg4Mzg1MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8838524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msahamed",
"html_url": "https://github.com/msahamed",
"followers_url": "https://api.github.com/users/msahamed/followers",
"following_url": "https://api.github.com/users/msahamed/following{/other_user}",
"gists_url": "https://api.github.com/users/msahamed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/msahamed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msahamed/subscriptions",
"organizations_url": "https://api.github.com/users/msahamed/orgs",
"repos_url": "https://api.github.com/users/msahamed/repos",
"events_url": "https://api.github.com/users/msahamed/events{/privacy}",
"received_events_url": "https://api.github.com/users/msahamed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Same issue",
"Hi, we don't fully support saving/loading these models using keras' save/load methods (yet). In the meantime, please use `model.from_pretrained` or `model.save_pretrained`, which also saves the configuration file.",
"Hello @LysandreJik , \r\nThank you for the information. \r\n\r\nCould you point me a direction and tell me a little more about the implementation procedure, so that I could do research and possibly implement the methods? If everything goes well, I could make a pull request that might benefit others as well. \r\n\r\nSabber",
"I had this exact error. I got around it by saving the weights and the code that creates the model. After training your model, run`model.save_weights('path/savefile')`. Note there is no .h5 on it.\r\n\r\nWhen you want to reuse the model later, run your code until `model.compile()`. Then, `model.load_weights('path/savefile')`. ",
"Thanks, works perfectly",
"Does this work now with newer versions?",
"I am also facing same issue. Any solution.",
"The issue still occurs on TF 2.6.0 which is very disappointing.\r\nI tried training on Colab's TPU and on GPU. \r\n\r\n- For TPU case I did not find a way to save & then load model properly;\r\n- For GPU case model.save() throws 'NotImplemented' error. However, saving weights and then loading them into a compiled model works:\r\n\r\n1. Save weights, either with callbacks or with `model.save_weights`;\r\n2. When you need the model for inference, firstly create the model of the same architecture that was used for training (I packed everything into a create_model() function to ensure the architecture is the same)\r\n3. Compile the model\r\n4. Use `model.load_weights`\r\n",
"cc @Rocketknight1 ",
"This still occurs, not only with distilbert but also many others. I don't see why this issue was closed - The described workaround is quite cumbersome and error-prone, and I don't see why this cannot be implemented inside the library, given that the configuration should already be in place to allow overriding get_config / from_config methods?",
"Hi, TF maintainer here! You're right, and we're going to reopen this one. We're very constrained on time right now, though - I'll try to investigate it as soon as I get the chance.",
"Thanks for reopening this. I think i was able to work around it by using the model.distilbert property, which itself is the base layer. Maybe it would be as simple as returning the base layers get_config/from_config with some tweaks?",
"@Zahlii You are correct - the underlying issue is simply that `get_config` and `from_config` were never implemented correctly for most Transformers models! We only got away with it for this long because a lot of the standard training setups never called them. We're working on a PR right now.",
"We've attempted a patch at #14361 - if anyone has any suggestions, or wants to try it out, please let us know! You can test the PR branch with `pip install git+https://github.com/huggingface/transformers.git@add_get_config`",
"The patch has now been merged. It'll be in the next release, or if anyone else is encountering this issue before then, you can install from master with `pip install git+https://github.com/huggingface/transformers.git`",
"Since the patch in https://github.com/huggingface/transformers/pull/14361 has been reverted, is there a timeline for a fix? (Or is there a known workaround one could use?) Thanks :) ",
"@skbaur Although that patch was reverted, we quickly followed up with a fixed one at https://github.com/huggingface/transformers/pull/14415 , so the issue should now be resolved. If you're still encountering this issue after updating to the most recent version of Transformers, please let me know!",
"> @skbaur Although that patch was reverted, we quickly followed up with a fixed one at #14415 , so the issue should now be resolved. If you're still encountering this issue after updating to the most recent version of Transformers, please let me know!\r\n\r\nHi @Rocketknight1 , thanks for your reply! You are right, it does work when saving in the tensorflow format (not hdf5). This does solve the issue I was facing.\r\n\r\nWhat did not work for me was this (minimal example adapted from https://github.com/huggingface/transformers/issues/14430 ):\r\n\r\n```\r\nimport tensorflow as tf\r\nimport transformers\r\nimport sys\r\n\r\nprint(sys.version)\r\nprint(tf.__version__)\r\nprint(transformers.__version__)\r\n\r\nbert = transformers.TFBertModel(transformers.BertConfig())\r\ninput_ids = tf.keras.layers.Input(shape=(512,), dtype=tf.int32)\r\nmodel = tf.keras.Model(inputs=[input_ids], outputs=[bert(input_ids).last_hidden_state])\r\nmodel.compile()\r\n\r\n# tf.keras.models.save_model(model, \"model_tf\", save_format='tf') # This works\r\ntf.keras.models.save_model(model, \"model_h5.h5\", save_format='h5') # This fails\r\n```\r\n\r\nOutput:\r\n\r\n```\r\n3.6.9 (default, Oct 8 2020, 12:12:24) \r\n[GCC 8.4.0]\r\n2.4.4\r\n4.12.5\r\n```\r\n\r\nand then it fails with\r\n\r\n\r\n```\r\n~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py in get_network_config(network, serialize_layer_fn)\r\n 1347 filtered_inbound_nodes.append(node_data)\r\n 1348 \r\n-> 1349 layer_config = serialize_layer_fn(layer)\r\n 1350 layer_config['name'] = layer.name\r\n 1351 layer_config['inbound_nodes'] = filtered_inbound_nodes\r\n\r\n~/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in serialize_keras_object(instance)\r\n 248 return serialize_keras_class_and_config(\r\n 249 name, {_LAYER_UNDEFINED_CONFIG_KEY: True})\r\n--> 250 raise e\r\n 251 serialization_config = {}\r\n 252 for key, item in config.items():\r\n\r\n~/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py in serialize_keras_object(instance)\r\n 243 name = get_registered_name(instance.__class__)\r\n 244 try:\r\n--> 245 config = instance.get_config()\r\n 246 except NotImplementedError as e:\r\n 247 if _SKIP_FAILED_SERIALIZATION:\r\n\r\n~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in get_config(self)\r\n 2247 \r\n 2248 def get_config(self):\r\n-> 2249 raise NotImplementedError\r\n 2250 \r\n 2251 @classmethod\r\n\r\nNotImplementedError: \r\n```\r\n",
"Hi @skbaur, your code runs fine for me! Here's my outputs:\r\n```\r\n3.9.6 (default, Aug 18 2021, 19:38:01) \r\n[GCC 7.5.0]\r\n2.6.0\r\n4.13.0.dev0\r\n```\r\nCan you try, in order:\r\n\r\n1) Installing transformers from master with `pip install git+https://github.com/huggingface/transformers.git`\r\n2) Updating TF to version 2.6 or 2.7\r\n\r\nand let me know if either of those fixes it for you?",
"> Hi @skbaur, your code runs fine for me! Here's my outputs:\r\n> \r\n> ```\r\n> 3.9.6 (default, Aug 18 2021, 19:38:01) \r\n> [GCC 7.5.0]\r\n> 2.6.0\r\n> 4.13.0.dev0\r\n> ```\r\n> \r\n> Can you try, in order:\r\n> \r\n> 1. Installing transformers from master with `pip install git+https://github.com/huggingface/transformers.git`\r\n> 2. Updating TF to version 2.6 or 2.7\r\n> \r\n> and let me know if either of those fixes it for you?\r\n\r\nOption 1. already seems to work (Installing transformers from master with pip install git+https://github.com/huggingface/transformers.git , but not updating TF).\r\n\r\nThe error reappears when downgrading back to transformers 4.12.5.",
"@skbaur It seems like one of the relevant PRs didn't make it into the release, in that case - please use the master version for now, and hopefully once 4.13 is released you can just use that instead!"
] | 1,589 | 1,638 | 1,636 | NONE | null | # 🐛 Bug
## Information
I am trying to build a Keras model in which I use DistilBERT as a non-trainable embedding layer. The model compiles and fits well, and even the predict method works. But when I want to save it using model.save('model.h5'), it fails and shows the following error:
```
> ---------------------------------------------------------------------------
> NotImplementedError Traceback (most recent call last)
> <ipython-input-269-557c9cec7497> in <module>
> ----> 1 model.get_config()
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in get_config(self)
> 966 if not self._is_graph_network:
> 967 raise NotImplementedError
> --> 968 return copy.deepcopy(get_network_config(self))
> 969
> 970 @classmethod
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in get_network_config(network, serialize_layer_fn)
> 2117 filtered_inbound_nodes.append(node_data)
> 2118
> -> 2119 layer_config = serialize_layer_fn(layer)
> 2120 layer_config['name'] = layer.name
> 2121 layer_config['inbound_nodes'] = filtered_inbound_nodes
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in serialize_keras_object(instance)
> 273 return serialize_keras_class_and_config(
> 274 name, {_LAYER_UNDEFINED_CONFIG_KEY: True})
> --> 275 raise e
> 276 serialization_config = {}
> 277 for key, item in config.items():
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in serialize_keras_object(instance)
> 268 name = get_registered_name(instance.__class__)
> 269 try:
> --> 270 config = instance.get_config()
> 271 except NotImplementedError as e:
> 272 if _SKIP_FAILED_SERIALIZATION:
>
> /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in get_config(self)
> 965 def get_config(self):
> 966 if not self._is_graph_network:
> --> 967 raise NotImplementedError
> 968 return copy.deepcopy(get_network_config(self))
> 969
>
> NotImplementedError:
```
The language I am using the model in is English.
The problem arises when using my own modified scripts: (give details below)
```
import tensorflow as tf
from transformers import DistilBertConfig, TFDistilBertModel, DistilBertTokenizer
max_len = 8
distil_bert = 'distilbert-base-uncased'
config = DistilBertConfig(dropout=0.2, attention_dropout=0.2)
config.output_hidden_states = False
transformer_model = TFDistilBertModel.from_pretrained(distil_bert, config = config)
input_word_ids = tf.keras.layers.Input(shape=(max_len,), dtype = tf.int32, name = "input_word_ids")
distill_output = transformer_model(input_word_ids)[0]  # last hidden state
cls_out = tf.keras.layers.Lambda(lambda seq: seq[:, 0, :])(distill_output)  # [CLS]-position vector
X = tf.keras.layers.BatchNormalization()(cls_out)
X = tf.keras.layers.Dense(256, activation='relu')(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.BatchNormalization()(X)
X = tf.keras.layers.Dense(128, activation='relu')(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.BatchNormalization()(X)
X = tf.keras.layers.Dense(64, activation='relu')(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.Dense(2)(X)
model = tf.keras.Model(inputs=input_word_ids, outputs=X)
# Freeze the input layer, the DistilBERT layer, and the Lambda layer.
for layer in model.layers[:3]:
layer.trainable = False
```
The task I am working on uses my own dataset.
## To reproduce
Steps to reproduce the behavior:
1. Run the above code
2. You will get the error when saving the model as
```
model.save('model.h5')
```
You can get the same error if you try:
```
model.get_config()
```
**_An interesting observation:_**
If you save the model without specifying ".h5", like
```
model.save('./model')
```
it saves the model in the TensorFlow saved_model format and creates folders (assets (empty), variables, and some index files). But if you try to load the model, it produces different errors related to DistilBERT/BERT. It may be due to some naming inconsistency (input_ids vs. inputs, see below) inside the DistilBERT model.
```
new_model = tf.keras.models.load_model('./model')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/usr/local/lib/python3.7/site-packages/tensorflow/python/util/nest.py in assert_same_structure(nest1, nest2, check_types, expand_composites)
377 _pywrap_utils.AssertSameStructure(nest1, nest2, check_types,
--> 378 expand_composites)
379 except (ValueError, TypeError) as e:
ValueError: The two structures don't have the same nested structure.
First structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}
Second structure: type=TensorSpec str=TensorSpec(shape=(None, 8), dtype=tf.int32, name='inputs')
More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, 8), dtype=tf.int32, name='inputs')" is not
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-229-b46ed71fd9ad> in <module>
----> 1 new_model = tf.keras.models.load_model(keras_model_path)
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)
188 if isinstance(filepath, six.string_types):
189 loader_impl.parse_saved_model(filepath)
--> 190 return saved_model_load.load(filepath, compile)
191
192 raise IOError(
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in load(path, compile)
114 # TODO(kathywu): Add saving/loading of optimizer, compiled losses and metrics.
115 # TODO(kathywu): Add code to load from objects that contain all endpoints
--> 116 model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)
117
118 # pylint: disable=protected-access
/usr/local/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py in load_internal(export_dir, tags, loader_cls)
602 loader = loader_cls(object_graph_proto,
603 saved_model_proto,
--> 604 export_dir)
605 root = loader.get(0)
606 if isinstance(loader, Loader):
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in __init__(self, *args, **kwargs)
186 self._models_to_reconstruct = []
187
--> 188 super(KerasObjectLoader, self).__init__(*args, **kwargs)
189
190 # Now that the node object has been fully loaded, and the checkpoint has
/usr/local/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py in __init__(self, object_graph_proto, saved_model_proto, export_dir)
121 self._concrete_functions[name] = _WrapperFunction(concrete_function)
122
--> 123 self._load_all()
124 self._restore_checkpoint()
125
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in _load_all(self)
213
214 # Finish setting up layers and models. See function docstring for more info.
--> 215 self._finalize_objects()
216
217 @property
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in _finalize_objects(self)
504 layers_revived_from_saved_model.append(node)
505
--> 506 _finalize_saved_model_layers(layers_revived_from_saved_model)
507 _finalize_config_layers(layers_revived_from_config)
508
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in _finalize_saved_model_layers(layers)
675 call_fn = _get_keras_attr(layer).call_and_return_conditional_losses
676 if call_fn.input_signature is None:
--> 677 inputs = infer_inputs_from_restored_call_function(call_fn)
678 else:
679 inputs = call_fn.input_signature[0]
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in infer_inputs_from_restored_call_function(fn)
919 for concrete in fn.concrete_functions[1:]:
920 spec2 = concrete.structured_input_signature[0][0]
--> 921 spec = nest.map_structure(common_spec, spec, spec2)
922 return spec
923
/usr/local/lib/python3.7/site-packages/tensorflow/python/util/nest.py in map_structure(func, *structure, **kwargs)
609 for other in structure[1:]:
610 assert_same_structure(structure[0], other, check_types=check_types,
--> 611 expand_composites=expand_composites)
612
613 flat_structure = [flatten(s, expand_composites) for s in structure]
/usr/local/lib/python3.7/site-packages/tensorflow/python/util/nest.py in assert_same_structure(nest1, nest2, check_types, expand_composites)
383 "Entire first structure:\n%s\n"
384 "Entire second structure:\n%s"
--> 385 % (str(e), str1, str2))
386
387
ValueError: The two structures don't have the same nested structure.
First structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}
Second structure: type=TensorSpec str=TensorSpec(shape=(None, 8), dtype=tf.int32, name='inputs')
More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, 8), dtype=tf.int32, name='inputs')" is not
Entire first structure:
{'input_ids': .}
Entire second structure:
.
```
## Expected behavior
I expect the model to save and load normally.
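In the meantime, the only workaround I have found is to save just the weights and rebuild the architecture in code (a minimal sketch reusing the `model` defined above; the file path is arbitrary):
```python
# After training: persist only the weights; no architecture serialization involved.
model.save_weights('distilbert_classifier_weights')

# Later, in a fresh process: re-run the model-building code above to recreate an
# identical architecture, compile it, and then restore the weights.
model.load_weights('distilbert_classifier_weights')
```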
## Environment info
- `transformers` version: 2.9.1
- Platform:
- Python version: 3.7.6
- Tensorflow version (CPU): 2.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4444/reactions",
"total_count": 13,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/transformers/issues/4444/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4443/comments | https://api.github.com/repos/huggingface/transformers/issues/4443/events | https://github.com/huggingface/transformers/issues/4443 | 620,385,741 | MDU6SXNzdWU2MjAzODU3NDE= | 4,443 | Issues with the EncoderDecoderModel for sequence to sequence tasks | {
"login": "dbaxter240",
"id": 7192411,
"node_id": "MDQ6VXNlcjcxOTI0MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7192411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dbaxter240",
"html_url": "https://github.com/dbaxter240",
"followers_url": "https://api.github.com/users/dbaxter240/followers",
"following_url": "https://api.github.com/users/dbaxter240/following{/other_user}",
"gists_url": "https://api.github.com/users/dbaxter240/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dbaxter240/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dbaxter240/subscriptions",
"organizations_url": "https://api.github.com/users/dbaxter240/orgs",
"repos_url": "https://api.github.com/users/dbaxter240/repos",
"events_url": "https://api.github.com/users/dbaxter240/events{/privacy}",
"received_events_url": "https://api.github.com/users/dbaxter240/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Update -- after hours working with this code, I somehow only now realized that the PR I linked had updates to modeling_encoder_decoder.py that fixed the issues I'm describing in part 2 of my issue, which is why the example works there.\r\n\r\nI am still confused about part 1 (and 3) however, since it does not look like that PR changed anything about the input_ids for the decoder.",
"Yeah sorry, we changed the code base quite a bit since the PR you linked. So in general at the moment GPT2 cannot be used as a decoder because it is missing cross attention layers. \r\nThe only encoder-decoder model supported atm is a Bert-2-Bert model (this also included all models that inherit from BERT though: Roberta, ...). Do you currently use a Bert-2-Bert model?",
"Thanks Patrick. I did get a bert-2-bert model working for sequence to sequence but it really did not perform well on dummy tasks such as the one in the PR I linked. I am not sure I understand how a Bert-2-Bert model is supposed to work, isn't BERT an encoder architecture? How is it used as a decoder? (I was able to get the code working, but don't understand the theory behind a bert-2-bert model, and am wondering if that explains the poor performance with this model type.)",
"Can you link your code of your working Bert-2-Bert model here? Just a link to a GitHub repo or post it in the issue here would be great :-)",
"@patrickvonplaten My code was almost totally copied from that example in the pull request. I've been experimenting a bunch so it hasn't been constant, but I tried a bert2bert model again last night and while it looked like it was training properly etc, the model did not produce any results in the end.\r\n\r\nI've pushed the code to a new repo here that you can look at https://github.com/dbaxter240/bert2bertexample\r\n\r\nSince my original raising of this issue, I ended up cloning the transformers repo to manually make some of the changes that were in the pull request I linked. Since then, I've been able to get a GPT2 model to actually work reasonably well on the dummy problem, but Bert2Bert still fails.\r\n\r\nThe repo contains my modified copy of modeling_encoder_decoder.py so you can see what's going on. It's essentially a few of the same changes made to the file in the PR I linked.\r\n\r\nI'm not sure if this now falls out of your realm to investigate since I've modified the source code now, but the Bert2Bert model should be working exactly as it was prior to me tweaking the source code. I've been reading into your documentation on how to use BERT as a decoder, and as far as I can tell I'm (or the existing source code is) providing the expected parameters correctly.\r\n\r\nThanks!",
"Hi @dbaxter240,\r\nMultiple bugs were fixed in #4680. Can you please take a look whether this error persists?\r\n\r\nI think ideally you should not copy paste old encoder decoder code into another repo since the code quickly becomes outdated and is hard to debug for us. The `EncoderDecoderModel` is still a very premature feature of this library and prone to change quickly. It would be great if you could try to use as much up-to-date code of this library as possible.\r\n\r\nI'm very sorry, for only finding this big bug now! It seems like you have invested quite a lot of energy into your code. I will soon (~2 weeks) open-source a notebook giving a nice example of how the `EncoderDecoderModel` can be leverage to fine-tune a Bert2Bert model. \r\n\r\nAlso note that this PR #3402 is rather outdated and since we don't provide `EncoderDecoderModel` support for GPT2 at the moment still not possible.\r\n\r\n",
"@patrickvonplaten Thank you very much for your time with this!\r\n\r\nI haven't had too much time to play with the code including your change yet, but it looks like there are some differences in the behavior, so perhaps I will have better results once I'm able to put more time into training up the model!\r\n\r\nI think the main question/issue I'm still hitting in my limited time toying with it is my question #1 from above. A main reason behind me trying to modify the source code originally was the required parameter of either decoder_input_ids or decoder_input_embeds, and not totally understanding what to provide there (at training vs. evaluation time.)\r\n\r\nI'd taken a hint from the PR I'd mentioned which just passed the encoder hidden states as the decoder_input_embeds, so that's what I was trying to achieve. Using the code including your change, those parameters are required again and I can't quite use that approach.\r\n\r\nIt looks like the encoder hidden states **are** being passed into the decoder in the EncoderDecoderModel.forward() method via the encoder_hidden_states parameter, so that looks good, but then as mentioned in question 1 I'm not sure I understand what the expected input for decoder_input_ids or decoder_input_embeds is. Is the idea that you provide decoder_input_ids as the expected output token ids (shifted right with a PAD token) during training so the model has the expected output while training, but then completely mask those tokens during evaluation so your model can't \"see the answer\"? \r\n\r\nI will keep playing with it to see if I can figure that piece out, but if you have any tips or input I would greatly appreciate it!\r\n\r\nThank you again for your help with this!",
"Maybe https://github.com/huggingface/transformers/issues/4647#issuecomment-636306986 might help as well here",
"@patrickvonplaten Thanks Patrick, that did clear up a fair bit for me (especially regarding not needing to shift the tokens, but I'm still not sure I understand the answer to my main question in #1 above.\r\n\r\nIn the issue you linked, you are providing the target sequence (converted to token ids) as decoder_input_ids for training. This makes sense to me, since the underlying code is shifting the tokens right by one for us. What I still don't understand is what to provide as the decoder_input_ids when doing evaluation. \r\n\r\n1. If I do that same thing with my test set (feed the target sequence as decoder_input_ids), then I'm just basically feeding the answer to my model. I tested that it is in fact \"cheating\" and using this information by putting some crazy things in my test set which the model managed to classify accurately (it definitely should not have been able to.)\r\n\r\n2. If I instead feed the source sequence converted to token ids during evaluation (as I've seen in some documentation) then I'm giving my model different information during training and evaluation.\r\n\r\n3. If I try to not provide any decoder_input_ids during evaluation (after calling model.eval() ), then I get a \"ValueError: You have to specify either input_ids or input_embeds.\"\r\n\r\nMy expectation was that during training, I would feed it the target sequence as decoder_input_ids and then during evaluation, I would not input decoder_input_ids and the model would only use the previous tokens it had generated. If I provide the target sequence as decoder_input_ids during training, what am I supposed to be providing as decoder_input_ids during evaluation?\r\n\r\nThank you again for your help!",
"Disregard the above comment -- as you hinted above I was confusing myself by looking at some outdated examples :) \r\n\r\nI'm now generating my predictions with \r\n\r\n`decoder_predictions = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)`\r\n\r\nI haven't been able to get too great of results yet, but haven't been able to dig into why with recent updates. I will be continuing to test over the next few days and will let you know if the bug fixes you mention above made the difference!",
"> I'm very sorry, for only finding this big bug now! It seems like you have invested quite a lot of energy into your code. I will soon (~2 weeks) open-source a notebook giving a nice example of how the `EncoderDecoderModel` can be leverage to fine-tune a Bert2Bert model.\r\n\r\n@patrickvonplaten Any updates on the notebook or any other examples to fine-tune a Bert2Bert model? I find myself unsure of how to go about it and the examples would be a good starting point to understand the same. I have picked up some things from other issues (https://github.com/huggingface/transformers/issues/4647) regarding this but not sure if I am doing the right thing.\r\n\r\n\r\n",
"@mitesh-mutha For what it's worth, I was able to get a model up and running with pretty reasonable results going off of the code linked in the last comment of that work item. Not sure if you when an official example will be available, but that code helped me a lot if you haven't looked at that code much yet.",
"@mitesh-mutha - very bad time estimation from my part again :D Next week (promise!), I will start working on notebook training / fine-tuning Bert2Bert on summarization. But the core code should not differ very much from the code I posted in the other comment.",
"@patrickvonplaten Hello Patrick. I have tried to fine tune a Bert2Bert model. The input to the model is a string of concatenated sentences and the output are the sentences reformulated in a paragraph. So far I implemented the model in Colab but the results are not that good. Here is my working code https://colab.research.google.com/drive/19G_wRPsc6FvXxxeoQZ3WzYaEkwm9YByv?usp=sharing . \r\nIt would be so nice if you can make a small tutorial on how to fine-tune a Bert2Bert model with a good result, such that I can find out where the problem lies in the code. Thank you :)",
"Great! Thanks, @patrickvonplaten! \r\nI did look into the code that you and @dbaxter240 have mentioned. I implemented a similar thing, however, I am not getting great results. My code is similar to what @iliemihai has provided. Just for a quick try, I tried to fine-tune it to generate the same sentence but, as I mentioned, results were not good for me.\r\nLooking at a sample tutorial or example would help me iron out any problems I might have in my code. \r\n",
"Hey, as usual I'm very late on my own time timelines, but I started working on a Bert2Bert tutorial for summarization yesterday :-). \r\nIt's still work in progress, but it will be ready by next week. \r\n\r\nThe code should work as it is - I have to fine-tune the hyper parameters and want to add some nicer metrics to measure the performance during training.\r\n\r\nIf you want to follow the work live :D here is the google colab I'm working on at the moment:\r\nhttps://colab.research.google.com/drive/13RXRepDN7bOJgdxPuIziwbcN8mpJLyNo?usp=sharing\r\n\r\n\r\n@iliemihai, one thing I can directly see from your notebook is that I think you are not masking the loss of padded tokens so that the loss of all pad token id is back propagated through the network.\r\n\r\nSince your `decoder_input_ids` are in PyTorch I think you can do the following for your `labels`:\r\n\r\n```python\r\nlabels = decoder_input_ids.clone()\r\n# mask loss for padding\r\nlabels[labels == tokenizer.pad_token_id] = -100\r\n```",
"Thank you @patrickvonplaten I will watch into it. Think that I might have to tune the hyperparameters. Also my dataset is small (1000-2000 pairs of paragraphs with under 128 words) compared to other datasets.",
"Hey guys, small update from my side. \r\nI have trained a Bert2Bert on summarization (without real hyper parameter search) and the results are quite promising.\r\nYou can check it out here: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16\r\n\r\nThe training code to reproduce the results and some examples can be found in the model card. \r\n\r\nHope this helps for now. Will be off for two weeks, but plan on a bit more sophisticated training + clean notebook and docs for the `EncoderDecoder` framework with @sshleifer afterward. ",
"Hi! \r\nI was studying this tutorial: [https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16?text=The+goal+of+life+is+%3Cmask%3E.](url) and I noticed that on the GPT2 tokenizer the pad_token, the unk_token, the bos_token and the eos_token are set as \"<|endoftext|>\". My question is why did you use \"<|endoftext|>\" for padding and unknown token?\r\nThank you in advance.",
"Hmm, there is no real reason behind it. Both `unk_token` and `pad_token` are not really important. On the pad_token the loss is never calculated and it does not matter for inference with batch_size=1. The unk_token does not really matter",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,606 | 1,606 | NONE | null | # ❓ Questions & Help
I have been attempting to build an encoder-decoder, sequence-to-sequence transformer model. For the most part I have been using BERT (bert-base-cased), but I have encountered issues with various models.
The model is intended for an English-to-English sequence-to-sequence problem.
For reference, I had been trying to use the seq2seq example in this pull request as a template:
https://github.com/huggingface/transformers/pull/3402
But I have needed to make some modifications to it to account for other recent changes in the EncoderDecoderModel class.
I have hit a few main issues; three are posted here. I think at least some of them are possibly bugs in the EncoderDecoderModel code.
1. A recent commit made some major changes to the forward method, and I've been hitting issues with the section that defines the decoder_outputs (around line 253 of modeling_encoder_decoder.py). The example in the pull request I linked does not provide decoder_input_ids when setting up the model, but that is now required by the code in your recent commit. When training, I modified the code to provide decoder_input_ids as the target tokens shifted one to the right with a PAD token in front, as described in various papers. However, I don't understand why this is required in eval mode -- shouldn't the model have no decoder input tokens at test/eval time, and only be able to see the tokens it has actually output so far? I don't understand what I'm supposed to provide as decoder_input_ids in evaluation mode, and haven't been able to find documentation on it.
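Concretely, the shifting I implemented looks roughly like this (a minimal sketch; `shift_right` is my own helper, not something from the linked PR):
```python
import torch

def shift_right(target_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # Fill with PAD, then copy the targets one position to the right, so the
    # decoder sees target token t-1 when predicting target token t.
    decoder_input_ids = target_ids.new_full(target_ids.shape, pad_token_id)
    decoder_input_ids[:, 1:] = target_ids[:, :-1].clone()
    return decoder_input_ids
```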
The code I'm currently using for training looks something like this:
```
for step, batch in enumerate(epoch_iterator):
# Skip past any already trained steps if resuming training
if steps_trained_in_current_epoch > 0:
steps_trained_in_current_epoch -= 1
continue
model.train()
batch = tuple(t.to(args.device) for t in batch)
input_ids, output_ids, input_mask, output_mask, _, decoder_ids = batch
# add other inputs here, including kwargs
inputs = {"input_ids": input_ids, "attention_mask": input_mask, "decoder_input_ids": decoder_ids}
# The output tuple structure depends on the model used and the arguments invoked
# For BERT-type models, this is
# decoder_predictions, encoded_embeddings, encoded_attention_mask = model(**inputs)
# For GPT2-type models, this at least starts with the decoder predictions
# See the EncoderDecoderModel class for more details
output = model(**inputs)
```
More context is given in the linked pull request, since again this is being copied from there. The initial pull request does not provide the `decoder_input_ids` parameter, but it seems that is now required. My code is similar in eval mode, but without decoder_input_ids, and this code fails:
```
for batch in tqdm(eval_dataloader, desc="Evaluating"):
batch = tuple(t.to(args.device) for t in batch)
input_ids, output_ids, input_mask, output_mask, _, decoder_ids = batch
with torch.no_grad():
inputs = {"input_ids": input_ids, "attention_mask": input_mask}
# The output tuple structure depends on the model used and the arguments invoked
# For BERT-type models, this is
# decoder_predictions, encoded_embeddings, encoded_attention_mask = model(**inputs)
# For GPT2-type models, this at least starts with the decoder predictions
# See the EncoderDecoderModel class for more details
output = model(**inputs)
```
This code fails in modeling_encoder_decoder, line 283, with
`ValueError: You have to specify either input_ids or inputs_embeds`
2. The pull request uses a GPT2 model as an example, but that no longer works, because the code mentioned in point 1 requires some parameters, like encoder_hidden_states, that GPT2 does not accept. When I try to create a GPT2 model, I get exceptions about this extra parameter. In other words, when I switch from a bert-bert model to a gpt2-gpt2 model, the code posted above fails in the "forward" method of the EncoderDecoderModel (line 283 of modeling_encoder_decoder) because "encoder_hidden_states" is an unexpected parameter for GPT2. Is this intended / is GPT2 no longer supported for an encoder-decoder architecture using this code?
3. This one is more of a general question... but since I'm posting the above two as issues anyway, I figured I'd add it here in case anybody can clarify, and save a separate issue being created...
I believe I'm doing this part correctly, but it was not handled in the example code so want to verify if possible... For the attention mask for the decoder, during training all non-PAD tokens are expected to be unmasked, and during evaluation no mask should be provided and a default causal mask will be used, right?
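Concretely, what I'm doing for the training-time decoder mask is just this (a sketch; `tokenizer.pad_token_id` is assumed to be the PAD id used in the batches):

```python
# 1 for every real decoder token, 0 for PAD; the causal part of the masking
# is, as far as I understand, applied inside the model itself.
decoder_attention_mask = (decoder_ids != tokenizer.pad_token_id).long()
```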
@patrickvonplaten, tagging you in this issue as requested.
Thank you for your time!! Let me know if you need more code; again, my code is about 95% identical to the run_seq2seq.py example in the linked PR, just with some changes to account for recent modifications in modeling_encoder_decoder.py | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4443/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4442/comments | https://api.github.com/repos/huggingface/transformers/issues/4442/events | https://github.com/huggingface/transformers/pull/4442 | 620,316,295 | MDExOlB1bGxSZXF1ZXN0NDE5NTg3NDI1 | 4,442 | [Communtiy notebooks] Fine-tuning / Training | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=h1) Report\n> Merging [#4442](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9ece8233d584cdc2eeae5165dd3329328fae328&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4442 +/- ##\n==========================================\n+ Coverage 78.14% 78.16% +0.01% \n==========================================\n Files 120 120 \n Lines 20087 20087 \n==========================================\n+ Hits 15697 15701 +4 \n+ Misses 4390 4386 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.51% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=footer). Last update [d9ece82...8ae2c5c](https://codecov.io/gh/huggingface/transformers/pull/4442?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Actually closing this, I think community notebooks should only be added in a single place."
] | 1,589 | 1,589 | 1,589 | MEMBER | null | A proposal for how notebooks for training could be added. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4442/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4442",
"html_url": "https://github.com/huggingface/transformers/pull/4442",
"diff_url": "https://github.com/huggingface/transformers/pull/4442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4442.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4441/comments | https://api.github.com/repos/huggingface/transformers/issues/4441/events | https://github.com/huggingface/transformers/pull/4441 | 620,308,971 | MDExOlB1bGxSZXF1ZXN0NDE5NTgxNTAx | 4,441 | [Community notebooks] General notebooks | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=h1) Report\n> Merging [#4441](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/590adb130be8e99eb638bb22136dda537b2da71d&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4441 +/- ##\n=======================================\n Coverage 78.14% 78.15% \n=======================================\n Files 120 120 \n Lines 20087 20087 \n=======================================\n+ Hits 15697 15698 +1 \n+ Misses 4390 4389 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=footer). Last update [590adb1...df8c3a1](https://codecov.io/gh/huggingface/transformers/pull/4441?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@patrickvonplaten This sounds good! Having a separate table for community models makes sense. "
] | 1,589 | 1,589 | 1,589 | MEMBER | null | A proposal for how we could link community notebooks.
I'm using the awesome notebook by @patil-suraj (`nlp` + `Trainer` + `transformers` :-)) as an example of how community notebooks can be added.
@patil-suraj - could you maybe review the PR and see whether it's ok for you? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4441/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4441",
"html_url": "https://github.com/huggingface/transformers/pull/4441",
"diff_url": "https://github.com/huggingface/transformers/pull/4441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4441.patch",
"merged_at": 1589826238000
} |
https://api.github.com/repos/huggingface/transformers/issues/4440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4440/comments | https://api.github.com/repos/huggingface/transformers/issues/4440/events | https://github.com/huggingface/transformers/issues/4440 | 620,292,238 | MDU6SXNzdWU2MjAyOTIyMzg= | 4,440 | Reformer training error | {
"login": "Brock007",
"id": 53123440,
"node_id": "MDQ6VXNlcjUzMTIzNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/53123440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Brock007",
"html_url": "https://github.com/Brock007",
"followers_url": "https://api.github.com/users/Brock007/followers",
"following_url": "https://api.github.com/users/Brock007/following{/other_user}",
"gists_url": "https://api.github.com/users/Brock007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Brock007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Brock007/subscriptions",
"organizations_url": "https://api.github.com/users/Brock007/orgs",
"repos_url": "https://api.github.com/users/Brock007/repos",
"events_url": "https://api.github.com/users/Brock007/events{/privacy}",
"received_events_url": "https://api.github.com/users/Brock007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Reformer does not support `mlm` training at the moment. Please make sure you use `lm` training :-) ",
"@patrickvonplaten thanks! Is there a plan to support mlm in the near future? \r\n\r\nI assume I can just remove the mlm flag to do lm training, right? How can I tell the script to pad the input sequences to a certain length as reformer requires the sequence length to be a multiple of least common multiple chunk_length? Thanks!",
"Yes, there are plans to add a `MaskedLM` version for Reformer. I will release a notebook this week (probs on Friday) on how to train the Reformer :-) "
] | 1,589 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
When training a Reformer model from scratch, I got the following error:
**TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'**
## Information
I am trying to train a Reformer model from scratch on English documents. My data is one document per line.
The problem arises when using:
* [ x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Split my training documents on '.' to create a corpus of sentences, and use Google's SentencePiece script to train a tokenization model.
2. Use run_language_modeling.py with --mlm --tokenizer_name=path/to/pretrained_SP_tokenizer to train the model.
3. My config.json looks like this:
```
{
"architectures": [
"ReformerModelWithLMHead"
],
"model_type": "reformer",
"vocab_size": 32000
}
```
File "/Users/a9dvzzz/.virtualenvs/cf-mlc/lib/python3.7/site-packages/transformers/trainer.py", line 506, in _training_step
outputs = model(**inputs)
File "/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'
## Expected behavior
The training process produces a saved Reformer model.
## Environment info
- `transformers` version: 2.9.1
- Platform: Darwin-19.4.0-x86_64-i386-64bit
- Python version: 3.7.4
- PyTorch version (GPU?): 1.3.0 (False)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: no, but it doesn't seem to matter, both failed.
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4440/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4439/comments | https://api.github.com/repos/huggingface/transformers/issues/4439/events | https://github.com/huggingface/transformers/pull/4439 | 620,285,968 | MDExOlB1bGxSZXF1ZXN0NDE5NTYzMjcw | 4,439 | Avoid abort due to missing paths in case of '--save_total_limit' argument | {
"login": "TJKlein",
"id": 7634373,
"node_id": "MDQ6VXNlcjc2MzQzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7634373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TJKlein",
"html_url": "https://github.com/TJKlein",
"followers_url": "https://api.github.com/users/TJKlein/followers",
"following_url": "https://api.github.com/users/TJKlein/following{/other_user}",
"gists_url": "https://api.github.com/users/TJKlein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TJKlein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TJKlein/subscriptions",
"organizations_url": "https://api.github.com/users/TJKlein/orgs",
"repos_url": "https://api.github.com/users/TJKlein/repos",
"events_url": "https://api.github.com/users/TJKlein/events{/privacy}",
"received_events_url": "https://api.github.com/users/TJKlein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | Checkpoint path will be deleted when using --save_total_limit. torch.save() would not be able to store and abort. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4439/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4439",
"html_url": "https://github.com/huggingface/transformers/pull/4439",
"diff_url": "https://github.com/huggingface/transformers/pull/4439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4439.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4438/comments | https://api.github.com/repos/huggingface/transformers/issues/4438/events | https://github.com/huggingface/transformers/issues/4438 | 620,270,635 | MDU6SXNzdWU2MjAyNzA2MzU= | 4,438 | BERT Fine-tuning problems | {
"login": "laetokang",
"id": 49485939,
"node_id": "MDQ6VXNlcjQ5NDg1OTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49485939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laetokang",
"html_url": "https://github.com/laetokang",
"followers_url": "https://api.github.com/users/laetokang/followers",
"following_url": "https://api.github.com/users/laetokang/following{/other_user}",
"gists_url": "https://api.github.com/users/laetokang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laetokang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laetokang/subscriptions",
"organizations_url": "https://api.github.com/users/laetokang/orgs",
"repos_url": "https://api.github.com/users/laetokang/repos",
"events_url": "https://api.github.com/users/laetokang/events{/privacy}",
"received_events_url": "https://api.github.com/users/laetokang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Is your dataset following the SQuAD dataset format? It seems that what's making it crash is that there's no `title` entry.\r\n\r\nYou can take a look at how SQuAD is setup [here](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
## Details
Hello. I'm fine-tuning BERT-base-uncased on a QA dataset I made, but training fails with the error below. Could you tell me how to solve this problem?
```
Traceback (most recent call last):
File "./examples/question-answering/run_squad.py", line 830, in <module>
main()
File "./examples/question-answering/run_squad.py", line 768, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
File "./examples/question-answering/run_squad.py", line 452, in load_and_cache_examples
examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
File "/home/address/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 525, in get_train_examples
return self._create_examples(input_data, "train")
File "/home/address/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 552, in _create_examples
title = entry["title"]
TypeError: string indices must be integers
Traceback (most recent call last):
File "/home/address/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/address/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/address/anaconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 253, in <module>
main()
File "/home/address/anaconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 249, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/address/anaconda3/bin/python', '-u', './examples/question-answering/run_squad.py', '--local_rank=1', '--model_type', 'bert', '--model_name_or_path', 'bert-base-uncased', '--do_train', '--do_eval', '--train_file', '/home/address/Desktop/address/train_split.json', '--predict_file', '/home/address/Desktop/address/val_split.json', '--learning_rate', '3e-5', '--num_train_epochs', '2', '--max_seq_length', '384', '--doc_stride', '128', '--output_dir', '../models/wwm_uncased_finetuned_squad/', '--per_gpu_eval_batch_size=3', '--per_gpu_train_batch_size=3']' returned non-zero exit status 1.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4438/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4437 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4437/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4437/comments | https://api.github.com/repos/huggingface/transformers/issues/4437/events | https://github.com/huggingface/transformers/pull/4437 | 620,227,207 | MDExOlB1bGxSZXF1ZXN0NDE5NTE2MjMx | 4,437 | Added model cards for Romanian BERT models | {
"login": "dumitrescustefan",
"id": 22746816,
"node_id": "MDQ6VXNlcjIyNzQ2ODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/22746816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dumitrescustefan",
"html_url": "https://github.com/dumitrescustefan",
"followers_url": "https://api.github.com/users/dumitrescustefan/followers",
"following_url": "https://api.github.com/users/dumitrescustefan/following{/other_user}",
"gists_url": "https://api.github.com/users/dumitrescustefan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dumitrescustefan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dumitrescustefan/subscriptions",
"organizations_url": "https://api.github.com/users/dumitrescustefan/orgs",
"repos_url": "https://api.github.com/users/dumitrescustefan/repos",
"events_url": "https://api.github.com/users/dumitrescustefan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dumitrescustefan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Awesome, thanks for sharing\r\n\r\nhttps://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1\r\n\r\nI've added a filter for 🇷🇴 here: https://huggingface.co/models?filter=romanian"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Added model card for ``dumitrescustefan/bert-base-romanian-cased-v1`` and ``dumitrescustefan/bert-base-romanian-uncased-v1`` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4437/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4437",
"html_url": "https://github.com/huggingface/transformers/pull/4437",
"diff_url": "https://github.com/huggingface/transformers/pull/4437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4437.patch",
"merged_at": 1589842137000
} |
https://api.github.com/repos/huggingface/transformers/issues/4436 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4436/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4436/comments | https://api.github.com/repos/huggingface/transformers/issues/4436/events | https://github.com/huggingface/transformers/pull/4436 | 620,226,417 | MDExOlB1bGxSZXF1ZXN0NDE5NTE1NTYy | 4,436 | [T5 fp16] Fix fp16 in T5 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Bart doesn't use this method yet, but LGTM!\r\n\r\nYeah, I noticed that as well - it's Bert that is using it."
] | 1,589 | 1,589 | 1,589 | MEMBER | null | This PR fixes the issue: #4287.
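For context, the overflow comes from filling the additive attention mask with `-1e9`, which is outside the fp16 range (max magnitude ~65504); the fix makes the fill value depend on the dtype. A minimal sketch of the idea (not the exact library code):

```python
import torch

def invert_attention_mask_sketch(mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # mask: 1.0 for tokens to attend to, 0.0 for padding.
    inverted = 1.0 - mask.to(dtype)
    # -1e9 would overflow fp16, so use a smaller magnitude there.
    fill_value = -1e4 if dtype == torch.float16 else -1e9
    return inverted * fill_value
```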
- A test for T5 is added.
- The function `self.invert_attention_mask` now includes an if statement so that no errors occur when the function is used in `fp16` mode. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4436/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4436",
"html_url": "https://github.com/huggingface/transformers/pull/4436",
"diff_url": "https://github.com/huggingface/transformers/pull/4436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4436.patch",
"merged_at": 1589815558000
} |
https://api.github.com/repos/huggingface/transformers/issues/4435 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4435/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4435/comments | https://api.github.com/repos/huggingface/transformers/issues/4435/events | https://github.com/huggingface/transformers/pull/4435 | 620,202,426 | MDExOlB1bGxSZXF1ZXN0NDE5NDk1NzAy | 4,435 | added model card for german-sentiment-bert | {
"login": "oliverguhr",
"id": 3495355,
"node_id": "MDQ6VXNlcjM0OTUzNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3495355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverguhr",
"html_url": "https://github.com/oliverguhr",
"followers_url": "https://api.github.com/users/oliverguhr/followers",
"following_url": "https://api.github.com/users/oliverguhr/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverguhr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverguhr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverguhr/subscriptions",
"organizations_url": "https://api.github.com/users/oliverguhr/orgs",
"repos_url": "https://api.github.com/users/oliverguhr/repos",
"events_url": "https://api.github.com/users/oliverguhr/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverguhr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Awesome model card. Link: https://huggingface.co/oliverguhr/german-sentiment-bert",
"Thanks a lot :+1: "
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | I added a description for my german sentiment model. If you have any feedback or questions please let me know. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4435/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4435",
"html_url": "https://github.com/huggingface/transformers/pull/4435",
"diff_url": "https://github.com/huggingface/transformers/pull/4435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4435.patch",
"merged_at": 1589841882000
} |
https://api.github.com/repos/huggingface/transformers/issues/4434 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4434/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4434/comments | https://api.github.com/repos/huggingface/transformers/issues/4434/events | https://github.com/huggingface/transformers/issues/4434 | 620,149,232 | MDU6SXNzdWU2MjAxNDkyMzI= | 4,434 | albertModel object has no attribute bias | {
"login": "Pydataman",
"id": 17594431,
"node_id": "MDQ6VXNlcjE3NTk0NDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/17594431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pydataman",
"html_url": "https://github.com/Pydataman",
"followers_url": "https://api.github.com/users/Pydataman/followers",
"following_url": "https://api.github.com/users/Pydataman/following{/other_user}",
"gists_url": "https://api.github.com/users/Pydataman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pydataman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pydataman/subscriptions",
"organizations_url": "https://api.github.com/users/Pydataman/orgs",
"repos_url": "https://api.github.com/users/Pydataman/repos",
"events_url": "https://api.github.com/users/Pydataman/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pydataman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! What is the `xxx` model? Is it one of our pre-trained checkpoints, is it an original TF checkpoint? Did this error happen in previous versions? Would you mind giving a bit of context?",
"Hi, I tried this model https://storage.googleapis.com/albert_models/albert_base_zh.tar.gz , which is new model release by Google on 2019 Dec. 30 on Albert's official [github page](https://github.com/google-research/albert). And I encountered the same error. My code is:\r\n```Python\r\nfrom transformers import AlbertModel, AlbertConfig\r\nconfig = json.load(open('albert_base/albert_config.json'))\r\nconfig = AlbertConfig(**config)\r\nmodel = AlbertModel.from_pretrained('albert_base/model.ckpt-best', config=config, from_tf=True)\r\n```\r\n\r\n\r\nThank you!",
"Thanks, I'll take a look.",
"That's because you're trying to load a checkpoint without first converting it. You should run the conversion script under `src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py`:\r\n\r\n```\r\npython src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py \\\r\n --tf_checkpoint_path=$PATH_TO_ALBERT/albert_base_chinese/model.ckpt-best \\\r\n --albert_config_file=$PATH_TO_ALBERT//albert_base_chinese/albert_config.json \\\r\n --pytorch_dump_path=$PATH_TO_ALBERT/albert_chinese.pt \r\n```",
"It works, thank you!\r\n\r\n"
] | 1,589 | 1,593 | 1,593 | NONE | null | transformers version: 2.9.0
model = AlbertModel.from_pretrained("xxx", from_tf=True)
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
Initialize PyTorch weight ['albert', 'pooler', 'bias'] from bert/pooler/dense/bias
Initialize PyTorch weight ['albert', 'pooler', 'kernel'] from bert/pooler/dense/kernel
Traceback (most recent call last):
File "d:\python_workbase\project\transformers_test\test.py", line 7, in <module>
model = AlbertModel.from_pretrained("D:\work\model\\albert_tiny_zh_google", from_tf=True)
File "D:\Programs\Python\Python37\lib\site-packages\transformers\modeling_utils.py", line 640, in from_pretrained
model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
File "D:\Programs\Python\Python37\lib\site-packages\transformers\modeling_albert.py", line 139, in load_tf_weights_in_albert
pointer = getattr(pointer, "bias")
File "D:\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 591, in __getattr__
type(self).__name__, name))
AttributeError: 'AlbertModel' object has no attribute 'bias' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4434/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4433/comments | https://api.github.com/repos/huggingface/transformers/issues/4433/events | https://github.com/huggingface/transformers/pull/4433 | 620,053,168 | MDExOlB1bGxSZXF1ZXN0NDE5MzczNTA1 | 4,433 | Create README.md | {
"login": "mar-muel",
"id": 19345805,
"node_id": "MDQ6VXNlcjE5MzQ1ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/19345805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mar-muel",
"html_url": "https://github.com/mar-muel",
"followers_url": "https://api.github.com/users/mar-muel/followers",
"following_url": "https://api.github.com/users/mar-muel/following{/other_user}",
"gists_url": "https://api.github.com/users/mar-muel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mar-muel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mar-muel/subscriptions",
"organizations_url": "https://api.github.com/users/mar-muel/orgs",
"repos_url": "https://api.github.com/users/mar-muel/repos",
"events_url": "https://api.github.com/users/mar-muel/events{/privacy}",
"received_events_url": "https://api.github.com/users/mar-muel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Also it could be interesting to convert and also upload PyTorch weights",
"I've tried - but the script unfortunately only works for TF 1.4 - would be glad share though! "
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4433/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4433/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4433",
"html_url": "https://github.com/huggingface/transformers/pull/4433",
"diff_url": "https://github.com/huggingface/transformers/pull/4433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4433.patch",
"merged_at": 1589841695000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4432/comments | https://api.github.com/repos/huggingface/transformers/issues/4432/events | https://github.com/huggingface/transformers/pull/4432 | 620,028,633 | MDExOlB1bGxSZXF1ZXN0NDE5MzUzNDkz | 4,432 | Tag onnx export tests as slow | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | The TensorFlow ONNX export test is very slow, as it makes many optimization passes over the graph.
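For reference, the tagging itself is the usual decorator pattern; a sketch, assuming the `slow` marker from the test utilities (exact import path may differ), which skips the test unless slow tests are enabled (e.g. `RUN_SLOW=1`):

```python
from tests.utils import slow  # import path assumed


@slow
def test_export_tensorflow_onnx():
    # Only runs when slow tests are enabled, since the TensorFlow export
    # performs many optimization passes over the graph.
    ...
```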
This PR marks both the PyTorch and TensorFlow export tests as slow, and keeps all the other (fast) tests as non-slow. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4432/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4432",
"html_url": "https://github.com/huggingface/transformers/pull/4432",
"diff_url": "https://github.com/huggingface/transformers/pull/4432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4432.patch",
"merged_at": 1589808282000
} |
https://api.github.com/repos/huggingface/transformers/issues/4431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4431/comments | https://api.github.com/repos/huggingface/transformers/issues/4431/events | https://github.com/huggingface/transformers/pull/4431 | 620,022,817 | MDExOlB1bGxSZXF1ZXN0NDE5MzQ4NzYz | 4,431 | Adding optimizations block from ONNXRuntime. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @tianleiwu ",
"use_external_data_format has some side-effect we'd like to mitigate here, I set to False by default and let the possibility for the user to override through CLI args."
] | 1,589 | 1,589 | 1,589 | MEMBER | null | cc @EmmaNingMS | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4431/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4431",
"html_url": "https://github.com/huggingface/transformers/pull/4431",
"diff_url": "https://github.com/huggingface/transformers/pull/4431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4431.patch",
"merged_at": 1589826753000
} |
https://api.github.com/repos/huggingface/transformers/issues/4430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4430/comments | https://api.github.com/repos/huggingface/transformers/issues/4430/events | https://github.com/huggingface/transformers/issues/4430 | 620,009,014 | MDU6SXNzdWU2MjAwMDkwMTQ= | 4,430 | 🐛 Weird learning rate with TPU Trainer | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): **ELECTRA**
Language I am using the model on (English, Chinese ...): **English**
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset: CNN/DM
## To reproduce
I used `run_glue.py` as an example to build a training script for TPU with the Trainer API. My task is sequence classification on the CNN/DM dataset.
I initialized the Trainer with the following optimizer and scheduler:
```python
from transformers import AdamW, Trainer, get_linear_schedule_with_warmup

# model, training_args, cnn_dm, MyCollator and optimizer_grouped_parameters
# are defined earlier in the script (omitted here)
optimizer = AdamW(optimizer_grouped_parameters, lr=training_args.learning_rate, eps=training_args.adam_epsilon)
# 287113 is the size of the CNN/DM training set
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=training_args.warmup_steps, num_training_steps=287113)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=cnn_dm["train"] if training_args.do_train else None,
    eval_dataset=cnn_dm["validation"],
    data_collator=MyCollator(),
    prediction_loss_only=True,
    optimizers=(optimizer, scheduler),
)
```
Now, the training procedure is working: the code runs fine on 8 TPU cores.
**But the loss is not decreasing.**
After looking into the TensorBoard logs, I found the learning rate to be very weird:

A few points to note:
* I specified a learning rate of **1e-4** (with the command-line argument `--learning_rate 1e-4`), but as you can see, the maximum value the learning rate reaches is **3.5e-6**.
* The shape of the learning rate curve is not what I expected: after warmup, the learning rate is supposed to decrease linearly, but instead it stays fixed.
I don't know why the learning rate behaves like this. Any idea what I might be doing wrong?
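To rule out the scheduler itself, I simulated it offline with a dummy parameter, and it produces the expected warmup-then-linear-decay shape (a sketch; the warmup step count here is picked arbitrarily):

```python
import torch
from transformers import AdamW, get_linear_schedule_with_warmup

dummy = torch.nn.Linear(1, 1)
optimizer = AdamW(dummy.parameters(), lr=1e-4)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=287113
)

for step in range(2000):
    optimizer.step()
    scheduler.step()
    if step % 500 == 0:
        print(step, scheduler.get_last_lr())
```

So the weird curve seems to come from the TPU setup rather than from the scheduler definition itself.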
_I can't share my notebook, but this seems to be the exact same issue with the official script `run_glue.py`, as described in #4358_
## Environment info
- `transformers` version: **2.9.1**
- Platform: **Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic**
- Python version: **3.6.9**
- PyTorch version (GPU?): **1.6.0a0+176174a (False)**
- Tensorflow version (GPU?): **2.2.0 (False)**
- Using GPU in script?: **No**
- Using distributed or parallel set-up in script?: **Yes : `xla_spawn.py`**
@julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4430/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4429/comments | https://api.github.com/repos/huggingface/transformers/issues/4429/events | https://github.com/huggingface/transformers/issues/4429 | 619,974,661 | MDU6SXNzdWU2MTk5NzQ2NjE= | 4,429 | mbart config.json missing | {
"login": "WeiliangGuo",
"id": 12620778,
"node_id": "MDQ6VXNlcjEyNjIwNzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/12620778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WeiliangGuo",
"html_url": "https://github.com/WeiliangGuo",
"followers_url": "https://api.github.com/users/WeiliangGuo/followers",
"following_url": "https://api.github.com/users/WeiliangGuo/following{/other_user}",
"gists_url": "https://api.github.com/users/WeiliangGuo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WeiliangGuo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WeiliangGuo/subscriptions",
"organizations_url": "https://api.github.com/users/WeiliangGuo/orgs",
"repos_url": "https://api.github.com/users/WeiliangGuo/repos",
"events_url": "https://api.github.com/users/WeiliangGuo/events{/privacy}",
"received_events_url": "https://api.github.com/users/WeiliangGuo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"fairseq doesn't use config.json.\r\n\r\n`BartConfig.from_pretrained('mbart-large-en-ro').to_json_file('config.json')` gets the config.json for English-Romanian, which is the only mbart checkpoint that's usable in this repository."
] | 1,589 | 1,589 | 1,589 | NONE | null | I downloaded mBART from fairseq; it contains dict.txt, model.pt and sentence.bpe.model, but no config.json. Where can we get it (and any other necessary missing files)?
I am using mBART as a pretrained model.
Is bart-large trained on multilingual data?
Has anyone compared BART with T5? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4429/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4428/comments | https://api.github.com/repos/huggingface/transformers/issues/4428/events | https://github.com/huggingface/transformers/issues/4428 | 619,930,847 | MDU6SXNzdWU2MTk5MzA4NDc= | 4,428 | How to extract the best candidate after token classification? | {
"login": "renjithsasidharan",
"id": 4523060,
"node_id": "MDQ6VXNlcjQ1MjMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4523060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renjithsasidharan",
"html_url": "https://github.com/renjithsasidharan",
"followers_url": "https://api.github.com/users/renjithsasidharan/followers",
"following_url": "https://api.github.com/users/renjithsasidharan/following{/other_user}",
"gists_url": "https://api.github.com/users/renjithsasidharan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renjithsasidharan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renjithsasidharan/subscriptions",
"organizations_url": "https://api.github.com/users/renjithsasidharan/orgs",
"repos_url": "https://api.github.com/users/renjithsasidharan/repos",
"events_url": "https://api.github.com/users/renjithsasidharan/events{/privacy}",
"received_events_url": "https://api.github.com/users/renjithsasidharan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | Let's assume the model predicts the following for an input sequence.
```
The O
creation. O
date O
is O
27 B-DATE
Aug I-DATE
2020 I-DATE
and O
update. O
date. O
is O
01-09-2020 B-DATE
```
How do you pick the best candidate for **creation date** from the logit values? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4428/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4427/comments | https://api.github.com/repos/huggingface/transformers/issues/4427/events | https://github.com/huggingface/transformers/pull/4427 | 619,904,711 | MDExOlB1bGxSZXF1ZXN0NDE5MjU0NzAx | 4,427 | Refactored the README.md file | {
"login": "girishponkiya",
"id": 2093282,
"node_id": "MDQ6VXNlcjIwOTMyODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2093282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/girishponkiya",
"html_url": "https://github.com/girishponkiya",
"followers_url": "https://api.github.com/users/girishponkiya/followers",
"following_url": "https://api.github.com/users/girishponkiya/following{/other_user}",
"gists_url": "https://api.github.com/users/girishponkiya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/girishponkiya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/girishponkiya/subscriptions",
"organizations_url": "https://api.github.com/users/girishponkiya/orgs",
"repos_url": "https://api.github.com/users/girishponkiya/repos",
"events_url": "https://api.github.com/users/girishponkiya/events{/privacy}",
"received_events_url": "https://api.github.com/users/girishponkiya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Ok for you @savasy?",
"Ya for sure @julien-c \r\nthanks a lot",
"Thanks @girishponkiya!",
"Thanks @girishponkiya !"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4427/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4427",
"html_url": "https://github.com/huggingface/transformers/pull/4427",
"diff_url": "https://github.com/huggingface/transformers/pull/4427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4427.patch",
"merged_at": 1589896585000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4426/comments | https://api.github.com/repos/huggingface/transformers/issues/4426/events | https://github.com/huggingface/transformers/issues/4426 | 619,879,741 | MDU6SXNzdWU2MTk4Nzk3NDE= | 4,426 | Lack of funetune examples for T5 model | {
"login": "MagicFrogSJTU",
"id": 8948386,
"node_id": "MDQ6VXNlcjg5NDgzODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8948386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MagicFrogSJTU",
"html_url": "https://github.com/MagicFrogSJTU",
"followers_url": "https://api.github.com/users/MagicFrogSJTU/followers",
"following_url": "https://api.github.com/users/MagicFrogSJTU/following{/other_user}",
"gists_url": "https://api.github.com/users/MagicFrogSJTU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MagicFrogSJTU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MagicFrogSJTU/subscriptions",
"organizations_url": "https://api.github.com/users/MagicFrogSJTU/orgs",
"repos_url": "https://api.github.com/users/MagicFrogSJTU/repos",
"events_url": "https://api.github.com/users/MagicFrogSJTU/events{/privacy}",
"received_events_url": "https://api.github.com/users/MagicFrogSJTU/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I've setup T5 fine-tuning using lightning and also HF's new Trainer. I can submit a PR for that. Would like to hear from @patrickvonplaten ",
"It would be awesome if you could open a PR for this! ",
"Great! I'll organize my examples and submit PR as soon as I finish it.\r\n\r\n ",
"@Chenyzsjtu @patrickvonplaten Could you please suggest me a good task for this ? I've fine-tuned T5 on mostly non-generative tasks (IMDB sentiment, Emotion classification, SWAG multiple choice, SQuAD1.1) and 2 generative tasks, cnn/dm and question generation. Which tasks should I consider adding ?",
"The GLUE and SuperGLUE tasks would be an obvious choice (mainly classification though). The [DecaNLP](http://decanlp.com/) tasks also have a nice mix of classification and generation.",
"> @Chenyzsjtu @patrickvonplaten Could you please suggest me a good task for this ? I've fine-tuned T5 on mostly non-generative tasks (IMDB sentiment, Emotion classification, SWAG multiple choice, SQuAD1.1) and 2 generative tasks, cnn/dm and question generation. Which tasks should I consider adding ?\r\n\r\nThere are many benchmarks tested in the original paper. Since we only need a example for demonstration purpose, a single task in GLUE or SuperGLUE should be enough. \r\nMayber MRPC? It needs less training steps, and was finetuned by itself rather than by the GLUE mixture as descriped in paper. Plus, it is also the example for bert here in examples/text-classification.",
"@ghomasHudson @Chenyzsjtu\r\nDecaNLP sounds good. So we can include one generative task and one non-generative.\r\nLet's see what @patrickvonplaten says then I'll move ahead with this.\r\n\r\nTill then can you check my fine-tuning examples and give me some feedback. Here are the notebooks.\r\n\r\nFor SQuAD [here](https://colab.research.google.com/drive/176NSaYjc2eeI-78oLH_F9-YV3po3qQQO?usp=sharing) \r\nFor (IMDB sentiment, Emotion classification, SWAG multiple choice) [here](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)",
"That's a great notebook! \r\n\r\nAlso note that you can now also use our `nlp` library, here: https://github.com/huggingface/nlp which will reduce your whole data preprocessing code to just a couple of lines. I think we have all the datasets you are using in your notebook(s) in the library :-). \r\n\r\nI think @sshleifer and @julien-c have worked more on the examples lately, so they probably would know better how to integrate it. @julien-c, @sshleifer - do you think we can add a pytorch lightning T5 notebook to our examples? ",
"@patrickvonplaten \r\nYes, using nlp library makes more sense. The SQuAD notebook above uses nlp library for data processing. Just ~10 lines of data processing code, and also uses HF trainer instead of lightning. So I have both the trainers ready, lightning as well as HF trainer.\r\n\r\nIMO we should use HF trainer instead of lightning since most of the examples now use HF trainer. Converting above tasks in HF trainer is fairly easy.",
"Only just saw the SQuAD notebook - amazing! \r\n\r\nOk, we had some internal discussions on how to add notebooks and decided to add a table to the README as shown in this PR: https://github.com/huggingface/transformers/pull/4441. @patil-suraj I use your SQuAD notebook as an example of how a notebook could be added. Can you maybe check if that's ok for you? \r\n\r\nIf that's fine for you I'll merge the PR and you can also add the other notebook for IMDB, Emotion classification, ... in a new PR - I would be awesome if you could also use `nlp` there, but you don't have to add it. Everything that's useful is welcome :-) ",
"@patrickvonplaten \r\nThank you for considering this! This sounds good to me.\r\nI'll also use the `nlp` library in the other notebook and open another PR for that.",
"> @patrickvonplaten\r\n> Thank you for considering this! This sounds good to me.\r\n> I'll also use the `nlp` library in the other notebook and open another PR for that.\r\n\r\nThat sounds awesome :-) ",
"I’ve also worked on an example notebook for tweet sentiment span extraction with T5 that I can share around this weekend (kaggle compe dataset).\n\nWould it be ok to PR this as well? Would I have to add the dataset to nlp? 🙂",
"For sure feel free to open a PR :-) It would be nice if you use `nlp`, but that's definitely not a must! \r\nWe are happy about every community notebook :-) ",
"@patil-suraj \r\nThanks a lot for your contribution of fine-tuning notebook!\r\nI notice that in the notebook your final performance for SQuAD1.1 on t5-base is:\r\n\"{'exact_match': 81.56102175969725, 'f1': 89.96016967193422}\"\r\nbut in the paper it is: F1/EM = 92.08/85.44\r\nIt seems that there is something we need to take care of here.\r\n",
"@Chenyzsjtu \r\nThe goal of the notebook was to get T5 working on TPU and show how we can fine-tune it for QA. So I didn't pay much attention to exact metrics. You can train it by following the learning rate and number of epochs used in the paper. That might increase it. ",
"> @Chenyzsjtu\r\n> The goal of the notebook was to get T5 working on TPU and show how we can fine-tune it for QA. So I didn't pay much attention to exact metrics. You can train it by following the learning rate and number of epochs used in the paper. That might increase it.\r\n\r\nI will have a try. Thanks!",
"> @Chenyzsjtu\r\n> The goal of the notebook was to get T5 working on TPU and show how we can fine-tune it for QA. So I didn't pay much attention to exact metrics. You can train it by following the learning rate and number of epochs used in the paper. That might increase it.\r\n\r\nThere is one more tiny problem...\r\nHave you tried evaluating the very first checkpoint (the pretrained model itself) on SQuAD?\r\nIt seems that your posted finetune-performance\r\n\"{'exact_match': 81.56102175969725, 'f1': 89.96016967193422}\"\r\nis worse than that of the pretrained model, which is\r\n83.122/90.958\r\n",
"Hmm, interesting. I'll have a look. ",
"@patil-suraj hi, I'm very new to `t5`. How can use `t5` for sentiment classification (simply just binary). I want to try on [this data sets](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) but don't know how to approach. I have bit understanding in nlp. Would anyone please suggest. AFAIK, `t5` performs `text-to-text`, so if I want to make binary (numeric), I've to map the 1 and 0 as positive and negative. ",
"Hi @Lincoln93 \r\nYou are right, you can map 0 and 1 as positive and negative and ask the model to predict the text.\r\nHave a look at [this](https://colab.research.google.com/drive/176NSaYjc2eeI-78oLH_F9-YV3po3qQQO?usp=sharing) notebook. It shows how to fine-tune t5 for binary as well as multi-class classification. ",
"We have a bunch of T5 notebooks now thanks to you guys :-) Closing the issue...",
"@patil-suraj Very cool notebooks indeed!",
"Hi @patil-suraj awesome notebooks! I noticed you always call `model.generate(...)` to evaluate, I wonder, is there a reason for this, and is that really necessary for `t5`? why not just use simple inference? `model(**inputs)` like BERT and others do?\r\n\r\n",
"> Hi @patil-suraj awesome notebooks! I noticed you always call `model.generate(...)` to evaluate, I wonder, is there a reason for this, and is that really necessary for `t5`? why not just use simple inference? `model(**inputs)` like BERT and others do?\r\n\r\nYou may need n-gram generation for more correct sentences?",
"> Hi @patil-suraj awesome notebooks! I noticed you always call `model.generate(...)` to evaluate, I wonder, is there a reason for this, and is that really necessary for `t5`? why not just use simple inference? `model(**inputs)` like BERT and others do?\r\n\r\nHi @saareliad , BERT models are mostly used for discriminative tasks i.e (classification, token classification, span extraction), so you just need to call the `model(**input)` only once. Where as T5 is a seq-to-seq generative model, which generates a single token at a time.\r\n\r\nSo to sample a sequence without `.generate` \r\n1. feed in the start token as `input_ids` to `forward`\r\n2. sample the next token by `argmax`\r\n3. add that token to `input_ids`\r\n4. repeat until you reach max len or sample `eos`\r\n\r\nthis quickly becomes complicated if you want beam search, or other sampling methods like top-k, top-p, temperature etc. So `.generate` is actually a powerful wrapper for all SOTA decoding methods. \r\n\r\nCheck [this](https://huggingface.co/blog/how-to-generate) awesome blog post by @patrickvonplaten to see what `.generate` has to offer",
"Thanks @patil-suraj ,\r\n\r\nIf we reduce the problem just to SQUAD, If I'm not wrong the extra `.generate` features are not used there at all?\r\n\r\nFor example, according the the code of your squad example:\r\n```\r\nanswers = []\r\nfor batch in tqdm(dataloader):\r\n outs = model.generate(input_ids=batch['input_ids'], \r\n attention_mask=batch['attention_mask'],\r\n max_length=16,\r\n early_stopping=True)\r\n outs = [tokenizer.decode(ids) for ids in outs]\r\n answers.extend(outs)\r\n```\r\nsince I didn't see there are beams for squad, `early_stopping=True` is not needed, and what happens is, more or less, the loop you described?\r\n\r\n \r\n\r\nI ask because I experience similar problem to what you had with TPU - I have to choose between running generate on CPU or running the aforementioned simplistic version on many (8-40) GPUs, which of course will be much faster even without using cache/past.",
"Hi,\r\nIs there an example showing T5 is finetuned on multiple tasks? with allowing to access the model architecture? thanks",
"Hi @rabeehk \r\nby multiple tasks do you mean multitask or different tasks ?\r\nif it's the latter, the this community [notebook ](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) shows how to fine-tune T5 for different tasks.\r\n\r\nIf multitask, then have a look at this [project ](https://github.com/patil-suraj/question_generation) which fine-tunes T5 for question generation, QA and answer extraction.",
"Hi\nI mean a mixture of multiple tasks like the original T5 paper on TPU so to\nrun efficiently for large scale and large datasets. Is there an\nexample/script by huggingface showing it?\nthanks alot\n\nOn Thu, Oct 22, 2020, 4:10 PM Suraj Patil <[email protected]> wrote:\n\n> Hi @rabeehk <https://github.com/rabeehk>\n> by multiple tasks do you mean multitask or different tasks ?\n> if it's the latter, the this community notebook\n> <https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb>\n> shows how to fine-tune T5 for different tasks.\n>\n> If multitask, then have a look at this project\n> <https://github.com/patil-suraj/question_generation> which fine-tunes T5\n> for question generation, QA and answer extraction.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/4426#issuecomment-714521374>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCA7OJV3PS6ECZXEGKTSMA4N3ANCNFSM4NDWJKVA>\n> .\n>\n"
] | 1,589 | 1,603 | 1,591 | NONE | null | # 🚀 Feature request
It seems that the examples under transformers/examples don't support T5, except for translation.
## Motivation
We need more examples! It should be easy to add some for simple benchmarks.
## Your contribution
None currently, but I am working on it!
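For anyone landing here before an official example exists, a minimal hedged sketch of a single T5 training step (the model name and texts are illustrative; note that transformers 2.x used `lm_labels`, later renamed `labels`):

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 frames every task as text-to-text: prefix the input with a task description.
input_ids = tokenizer.encode("translate English to German: The house is wonderful.", return_tensors="pt")
target_ids = tokenizer.encode("Das Haus ist wunderbar.", return_tensors="pt")

loss = model(input_ids=input_ids, lm_labels=target_ids)[0]  # `labels` in transformers >= 3.0
loss.backward()  # plug into any optimizer loop from here
```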
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4426/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4426/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4425/comments | https://api.github.com/repos/huggingface/transformers/issues/4425/events | https://github.com/huggingface/transformers/issues/4425 | 619,876,417 | MDU6SXNzdWU2MTk4NzY0MTc= | 4,425 | BERT and other models pretraining from scratch example | {
"login": "hairzooc",
"id": 13031514,
"node_id": "MDQ6VXNlcjEzMDMxNTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/13031514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hairzooc",
"html_url": "https://github.com/hairzooc",
"followers_url": "https://api.github.com/users/hairzooc/followers",
"following_url": "https://api.github.com/users/hairzooc/following{/other_user}",
"gists_url": "https://api.github.com/users/hairzooc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hairzooc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hairzooc/subscriptions",
"organizations_url": "https://api.github.com/users/hairzooc/orgs",
"repos_url": "https://api.github.com/users/hairzooc/repos",
"events_url": "https://api.github.com/users/hairzooc/events{/privacy}",
"received_events_url": "https://api.github.com/users/hairzooc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"https://huggingface.co/blog/how-to-train",
"Thank you for your swift reply :)\r\nHow about Electra model? Is it possible to pretrain from scratch as well?",
"Did you read the article? Section 3",
"Yup, I've read Section 3. :)\r\nAs long as I know Electra uses replaced token detection with discriminator and generator (GAN style).\r\nThat's why I thought that there could be something different from BERT-like masked lm.\r\nAnd I found the open issue below as well.\r\n\r\nhttps://github.com/huggingface/transformers/issues/3878\r\n\r\n",
"I modified https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py script few days ago for training electra from scratch. But there were some problems(maybe bugs) i had to solve for this task. \r\n\r\nCurrently I´m setting up a clean running version for training a electra language model from scratch with an additional document classification head based on the script.",
"I got it. Thank you for your effort!",
"@miketrimmel Hi, Is there still a bug if I try to train electra from scratch using run_language_modeling.py or it is available now? Thanks!",
"I had issues with the tb_writer. i tried it for new now and there were no issues with the writer any more(maybe I had an old version).\r\nIf you´re using a pretrained tokenizer it should work now. Training a new tokenizer is not supported. I have to say I´m new into the tokenization things. I´m training a Twitter language model from scratch so i wasn´t sure if the model will perform as good with the pretrained tokenizer (can be that there is a lot of vocabulary missing because of the \"Twitter-slang\"). So I trained a custom tokenizer. I will verify the different tokenizers the next days. I will also provide the model and tokenizer when its finished if someone wants to fine-tune it on his Twitter task.",
"Great! Thanks for explanation :)",
"@miketrimmel Could you please share the code for pretraining electra from scratch?",
"Yes, I will share it the next days here. Actually I´m busy with other things and I have to make it pretty before :D ",
"Could i know what's the meaning of \"You are instantiating a new tokenizer from scratch. This is not supported, but you can do it from another script, save it, and load it from here, using --tokenizer_name\" @miketrimmel \r\n\r\n\r\nCould i use a tokenizer from `https://github.com/huggingface/tokenizers` for initiation? I'd like to train a model from scratch.\r\n\r\n\r\n",
"Yes you could use a tokenizer from https://github.com/huggingface/tokenizers. But there is no batch_encode_plus method. I used the solution from another issue https://github.com/huggingface/tokenizers/issues/259 here. The solution with the wrapper from @theblackcat102 worked for me.",
"There is code for training ELECTRA from scratch still undergoing testing here https://github.com/huggingface/transformers/pull/4656\r\n\r\nIt's still under development but it pretty stable now.",
"> I modified https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py script few days ago for training electra from scratch. But there were some problems(maybe bugs) i had to solve for this task.\r\n> \r\n> Currently I´m setting up a clean running version for training a electra language model from scratch with an additional document classification head based on the script.\r\n\r\nAny chance you could share the code? I've been trying to do this myself, but am failing at getting results (whether in finetuning, or in running electra with TF in HF). Thanks!",
"can you give me some advices about how to pretrain the bart model on my own dataset? thank you soooooo much",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Detailed Explanation\r\nhttps://mlcom.github.io/",
"> I modified https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py script few days ago for training electra from scratch. But there were some problems(maybe bugs) i had to solve for this task.\r\n> \r\n> Currently I´m setting up a clean running version for training a electra language model from scratch with an additional document classification head based on the script.\r\n\r\nlocation is currently not available...please share the exact location",
"> Detailed Explanation\r\n> https://mlcom.github.io/Create-Language-Model/\r\n\r\nlocation is currently not available...please share the exact location",
"> > Detailed Explanation\r\n> > https://mlcom.github.io/Create-Language-Model/\r\n> \r\n> location is currently not available...please share the exact location\r\n\r\nmlcom.github.io"
] | 1,589 | 1,628 | 1,601 | NONE | null | Hi,
I've been fine-tuning on lots of tasks using this repo. Thanks :)
But I couldn't find any pretraining-from-scratch examples.
Please let me know if you have any advice on that.
It would be very helpful for my research.
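For later readers, a hedged sketch of the from-scratch workflow from the blog post linked in the first comment (all file names, vocab sizes, and hyperparameters are illustrative, and the tokenizer save API varies across `tokenizers` versions):

```python
from tokenizers import ByteLevelBPETokenizer
from transformers import RobertaConfig, RobertaForMaskedLM

# 1. Train a byte-level BPE tokenizer on your raw corpus (path is illustrative).
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["corpus.txt"], vocab_size=52000, min_frequency=2)
tokenizer.save_model(".")  # exact save method depends on the tokenizers version

# 2. Build a fresh, randomly initialized model from a config — no pretrained weights.
config = RobertaConfig(vocab_size=52000)
model = RobertaForMaskedLM(config=config)

# 3. Pretrain with the masked-LM objective, e.g. via
#    examples/language-modeling/run_language_modeling.py with the --mlm flag.
```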
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4425/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4425/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4424/comments | https://api.github.com/repos/huggingface/transformers/issues/4424/events | https://github.com/huggingface/transformers/pull/4424 | 619,861,543 | MDExOlB1bGxSZXF1ZXN0NDE5MjIxMDcw | 4,424 | Update README.md (model_card) | {
"login": "sy-wada",
"id": 62933006,
"node_id": "MDQ6VXNlcjYyOTMzMDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/62933006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sy-wada",
"html_url": "https://github.com/sy-wada",
"followers_url": "https://api.github.com/users/sy-wada/followers",
"following_url": "https://api.github.com/users/sy-wada/following{/other_user}",
"gists_url": "https://api.github.com/users/sy-wada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sy-wada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sy-wada/subscriptions",
"organizations_url": "https://api.github.com/users/sy-wada/orgs",
"repos_url": "https://api.github.com/users/sy-wada/repos",
"events_url": "https://api.github.com/users/sy-wada/events{/privacy}",
"received_events_url": "https://api.github.com/users/sy-wada/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Yes, looks good now"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | - add a citation.
- modify the table of the BLUE benchmark.
The table of the first version was not displayed correctly on https://huggingface.co/seiya/oubiobert-base-uncased.
Could you please confirm that this fix will allow you to display it correctly? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4424/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4424",
"html_url": "https://github.com/huggingface/transformers/pull/4424",
"diff_url": "https://github.com/huggingface/transformers/pull/4424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4424.patch",
"merged_at": 1589840298000
} |
https://api.github.com/repos/huggingface/transformers/issues/4423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4423/comments | https://api.github.com/repos/huggingface/transformers/issues/4423/events | https://github.com/huggingface/transformers/issues/4423 | 619,822,455 | MDU6SXNzdWU2MTk4MjI0NTU= | 4,423 | How to change transformers model embedding layer weights | {
"login": "acmilannesta",
"id": 47703762,
"node_id": "MDQ6VXNlcjQ3NzAzNzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/47703762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acmilannesta",
"html_url": "https://github.com/acmilannesta",
"followers_url": "https://api.github.com/users/acmilannesta/followers",
"following_url": "https://api.github.com/users/acmilannesta/following{/other_user}",
"gists_url": "https://api.github.com/users/acmilannesta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acmilannesta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acmilannesta/subscriptions",
"organizations_url": "https://api.github.com/users/acmilannesta/orgs",
"repos_url": "https://api.github.com/users/acmilannesta/repos",
"events_url": "https://api.github.com/users/acmilannesta/events{/privacy}",
"received_events_url": "https://api.github.com/users/acmilannesta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | I trained my own tokenizer and added new words. Now I need to change the embedding size of the pretrained model. What I do is like this:
```
import transformers as tfm
import tensorflow as tf
# load the pretrained backbone
backbone = tfm.TFRobertaModel.from_pretrained(PRETRAINED_PATH, output_hidden_states=True)
# random embeddings for the newly added vocabulary entries
add_emb = tf.random.uniform(shape=(new_vocab_size, 768), minval=-1., maxval=1.)
# concatenate the old and new embedding rows
new_emb = tf.concat((backbone.roberta.embeddings.word_embeddings, add_emb), 0)
# attempt to overwrite the embedding weight in place (index 194 is the word embedding here)
backbone.roberta.weights[194] = new_emb
```
However, the shape of the embedding weight is still the original vocab size.
But if I do
```
backbone = tfm.TFRobertaModel.from_pretrained(PRETRAINED_PATH, output_hidden_states=True)
add_emb = tf.random.uniform(shape=(new_vocab_size, 768), minval=-1., maxval=1.)
backbone.roberta.embeddings.word_embeddings= tf.concat((backbone.roberta.embeddings.word_embeddings, add_emb), 0)
```
Then the embedding weights are removed from the model's ```trainable_weights```, which ends up with only 198 elements instead of the original 199.
Am I doing something wrong to change the embedding weights? Thanks!
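For reference (added, not part of the original question): the library's supported way to grow the vocabulary is `resize_token_embeddings`, which copies the old rows and randomly initializes the new ones. A minimal sketch with the PyTorch model class (the added tokens are illustrative; TF support for this may vary by version):

```python
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
num_added = tokenizer.add_tokens(["newword1", "newword2"])  # hypothetical new tokens

model = RobertaModel.from_pretrained("roberta-base")
model.resize_token_embeddings(len(tokenizer))  # old rows kept, new rows randomly initialized
```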
The original stack overflow question is also posted:
https://stackoverflow.com/questions/61860156/how-to-change-transformers-model-embedding-layer-weights | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4423/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4422/comments | https://api.github.com/repos/huggingface/transformers/issues/4422/events | https://github.com/huggingface/transformers/pull/4422 | 619,817,442 | MDExOlB1bGxSZXF1ZXN0NDE5MTg4MzE4 | 4,422 | [T5 Conf] rename docstring to actual argument names | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | As mentioned in issue: #4139, the docstring names of the T5 Config are confusing since those names cannot be used to set the arguments.
This PR renames the arguments in the docstring and adds an explanation that those arguments can also be accessed via their properties.
To not break backward compatibility, renaming the docstring is better than renaming the actual variables IMO. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4422",
"html_url": "https://github.com/huggingface/transformers/pull/4422",
"diff_url": "https://github.com/huggingface/transformers/pull/4422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4422.patch",
"merged_at": 1589815896000
} |
https://api.github.com/repos/huggingface/transformers/issues/4421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4421/comments | https://api.github.com/repos/huggingface/transformers/issues/4421/events | https://github.com/huggingface/transformers/pull/4421 | 619,815,348 | MDExOlB1bGxSZXF1ZXN0NDE5MTg2OTcz | 4,421 | [test_pipelines] Mark tests > 10s @slow, small speedups | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | - pass in num_beams=2 to `SummarizationPipelines` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4421/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4421",
"html_url": "https://github.com/huggingface/transformers/pull/4421",
"diff_url": "https://github.com/huggingface/transformers/pull/4421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4421.patch",
"merged_at": 1589819001000
} |
https://api.github.com/repos/huggingface/transformers/issues/4420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4420/comments | https://api.github.com/repos/huggingface/transformers/issues/4420/events | https://github.com/huggingface/transformers/issues/4420 | 619,812,602 | MDU6SXNzdWU2MTk4MTI2MDI= | 4,420 | BERT Tokenization problem when the input string has a "." in the string, like floating number | {
"login": "wenhuchen",
"id": 1457702,
"node_id": "MDQ6VXNlcjE0NTc3MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1457702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wenhuchen",
"html_url": "https://github.com/wenhuchen",
"followers_url": "https://api.github.com/users/wenhuchen/followers",
"following_url": "https://api.github.com/users/wenhuchen/following{/other_user}",
"gists_url": "https://api.github.com/users/wenhuchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wenhuchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wenhuchen/subscriptions",
"organizations_url": "https://api.github.com/users/wenhuchen/orgs",
"repos_url": "https://api.github.com/users/wenhuchen/repos",
"events_url": "https://api.github.com/users/wenhuchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/wenhuchen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using: Tokenizer
* [ ] the official example scripts: (give details below) N/A
* [ ] my own modified scripts: (give details below) N/A
The tasks I am working on is: Any
* [ ] an official GLUE/SQUaD task: (give the name) N/A
* [ ] my own task or dataset: (give details below) N/A
## To reproduce
Steps to reproduce the behavior:
1. Load any BERT tokenizer
2. Tokenize something with a "." in between
3. Decode these ids, and you will find a mismatch
```
x = tokenizer.encode('AN.C', add_special_tokens=False)
z = tokenizer.decode(x)
```
It prints:
```
AN. C
```
## Expected behavior
```
AN.C
```
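For context (added note, not part of the original report): BERT's BasicTokenizer splits punctuation into separate tokens, so the original spacing around "." is not recoverable from the ids alone — decoding can only re-join tokens with heuristic spacing. A small sketch illustrating this (output shown for an uncased vocab):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("AN.C"))  # ['an', '.', 'c'] — the "." becomes its own token
```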
## Environment info
- `transformers` version:
- Platform: Ubuntu
- Python version: 3.6
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?): GPU
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4420/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4419/comments | https://api.github.com/repos/huggingface/transformers/issues/4419/events | https://github.com/huggingface/transformers/pull/4419 | 619,803,944 | MDExOlB1bGxSZXF1ZXN0NDE5MTc4ODc3 | 4,419 | [TF generate] Fix issue for batch output generation of different output length. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | This PR fixes the issue: #4088.
A wrong variable was used in TF generate to determine the sentence length in the case where multiple outputs have different sentence lengths and the maximum sentence length is less than the user-defined `max_length`.
Also, both PT and TF generate are refactored a bit so that the `cur_length` variable is incremented directly after the `input_ids` are incremented. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4419/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4419/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4419",
"html_url": "https://github.com/huggingface/transformers/pull/4419",
"diff_url": "https://github.com/huggingface/transformers/pull/4419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4419.patch",
"merged_at": 1589809901000
} |
https://api.github.com/repos/huggingface/transformers/issues/4418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4418/comments | https://api.github.com/repos/huggingface/transformers/issues/4418/events | https://github.com/huggingface/transformers/issues/4418 | 619,786,818 | MDU6SXNzdWU2MTk3ODY4MTg= | 4,418 | Scaling text classification / reusing models | {
"login": "timsuchanek",
"id": 1094804,
"node_id": "MDQ6VXNlcjEwOTQ4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1094804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timsuchanek",
"html_url": "https://github.com/timsuchanek",
"followers_url": "https://api.github.com/users/timsuchanek/followers",
"following_url": "https://api.github.com/users/timsuchanek/following{/other_user}",
"gists_url": "https://api.github.com/users/timsuchanek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timsuchanek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timsuchanek/subscriptions",
"organizations_url": "https://api.github.com/users/timsuchanek/orgs",
"repos_url": "https://api.github.com/users/timsuchanek/repos",
"events_url": "https://api.github.com/users/timsuchanek/events{/privacy}",
"received_events_url": "https://api.github.com/users/timsuchanek/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can pretty easily \"freeze\" parameters you don't want to backpropagate against, in PyTorch: \r\n\r\n```python\r\nfor param in parameters:\r\n param.requires_grad = False\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | CONTRIBUTOR | null | If I have a system where I want to train many text classifiers for many users, how could I go about it with the transformers library in a scalable way?
Right now I would have to run, say, a 10-minute training job per user on an RTX 2080 Ti for ALBERT on the dataset I have. That doesn't scale if I have thousands of users.
If I understand correctly, in the sequence classification models, the whole transformer model is being trained, so the backpropagation happens through the whole network.
However, if I now want to reuse the model for another user, maybe just passing in a bit more labeled data to customize a base classifier, how could I go about that?
It seems to me that I would basically have to "freeze" the whole BERT-style model, whichever one I use, and then only train a thin layer on top.
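A hedged sketch of that idea (the model and head are illustrative, not a recommendation from the thread): freeze the shared encoder once and train only a cheap per-user head.

```python
import torch.nn as nn
from transformers import AlbertModel

encoder = AlbertModel.from_pretrained("albert-base-v2")  # shared across all users
for param in encoder.parameters():
    param.requires_grad = False  # no backprop through the transformer

head = nn.Linear(encoder.config.hidden_size, 2)  # tiny per-user classifier on top
# train only head.parameters(); encoder outputs can even be precomputed and cached
```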
One possibility I see would be KNN using sentence-transformers; I already asked about it in that repo: https://github.com/UKPLab/sentence-transformers/issues/209
Maybe someone here has an idea which approach would make sense for such a situation.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4418/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4417/comments | https://api.github.com/repos/huggingface/transformers/issues/4417/events | https://github.com/huggingface/transformers/issues/4417 | 619,779,006 | MDU6SXNzdWU2MTk3NzkwMDY= | 4,417 | TypeError: add_() takes 1 positional argument but 2 were given | {
"login": "mqliu7",
"id": 38928187,
"node_id": "MDQ6VXNlcjM4OTI4MTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/38928187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mqliu7",
"html_url": "https://github.com/mqliu7",
"followers_url": "https://api.github.com/users/mqliu7/followers",
"following_url": "https://api.github.com/users/mqliu7/following{/other_user}",
"gists_url": "https://api.github.com/users/mqliu7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mqliu7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mqliu7/subscriptions",
"organizations_url": "https://api.github.com/users/mqliu7/orgs",
"repos_url": "https://api.github.com/users/mqliu7/repos",
"events_url": "https://api.github.com/users/mqliu7/events{/privacy}",
"received_events_url": "https://api.github.com/users/mqliu7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This was fixed on master on Friday, can you try pulling from master again?"
] | 1,589 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
## Information
I was trying to reproduce the GLUE fine-tuning example (https://huggingface.co/transformers/examples.html#fine-tuning-example) when I ran into this error:
```
File "~/anaconda3/lib/python3.7/site-packages/transformers/optimization.py", line 155, in step
exp_avg.mul_(beta1).add_(grad, 1.0 - beta1)
TypeError: add_() takes 1 positional argument but 2 were given
```
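For reference (added note, not in the original report): the root cause is that PyTorch 1.5 requires the scalar multiplier of in-place ops like `add_` to be passed as the `alpha` keyword; the fix on master amounts to something like:

```python
# PyTorch >= 1.5 wants the multiplier passed as the `alpha` keyword:
exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
```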
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
## To reproduce
Steps to reproduce the behavior:
Follow the official guide example here: https://huggingface.co/transformers/examples.html#fine-tuning-example
## Expected behavior
## Environment info
- `transformers` version: 2.9.1
- Platform: Linux 5.4.0-29-generic x86_64 Ubuntu 20.04 LTS
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0 (Yes)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes, 1
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4417/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4417/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4416/comments | https://api.github.com/repos/huggingface/transformers/issues/4416/events | https://github.com/huggingface/transformers/pull/4416 | 619,765,802 | MDExOlB1bGxSZXF1ZXN0NDE5MTUxNDU1 | 4,416 | Fixed spelling of training | {
"login": "soham96",
"id": 18757535,
"node_id": "MDQ6VXNlcjE4NzU3NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/18757535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soham96",
"html_url": "https://github.com/soham96",
"followers_url": "https://api.github.com/users/soham96/followers",
"following_url": "https://api.github.com/users/soham96/following{/other_user}",
"gists_url": "https://api.github.com/users/soham96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soham96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soham96/subscriptions",
"organizations_url": "https://api.github.com/users/soham96/orgs",
"repos_url": "https://api.github.com/users/soham96/repos",
"events_url": "https://api.github.com/users/soham96/events{/privacy}",
"received_events_url": "https://api.github.com/users/soham96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Spelling of training was incorrect. Fixed it. Sorry for such a bad PR :( | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4416/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4416",
"html_url": "https://github.com/huggingface/transformers/pull/4416",
"diff_url": "https://github.com/huggingface/transformers/pull/4416.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4416.patch",
"merged_at": 1589815410000
} |
https://api.github.com/repos/huggingface/transformers/issues/4415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4415/comments | https://api.github.com/repos/huggingface/transformers/issues/4415/events | https://github.com/huggingface/transformers/issues/4415 | 619,760,690 | MDU6SXNzdWU2MTk3NjA2OTA= | 4,415 | GPT2 perplexity rolling/striding way for evaluating a document. | {
"login": "sb1992",
"id": 10261100,
"node_id": "MDQ6VXNlcjEwMjYxMTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/10261100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sb1992",
"html_url": "https://github.com/sb1992",
"followers_url": "https://api.github.com/users/sb1992/followers",
"following_url": "https://api.github.com/users/sb1992/following{/other_user}",
"gists_url": "https://api.github.com/users/sb1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sb1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sb1992/subscriptions",
"organizations_url": "https://api.github.com/users/sb1992/orgs",
"repos_url": "https://api.github.com/users/sb1992/repos",
"events_url": "https://api.github.com/users/sb1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/sb1992/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | As I understand it, GPT-2 uses TextDataset as a loader, which produces the example list in blocks. Say we have the sentence "**we are in a climate crisis**" and a block size of 3. This will produce the example list
`ex = [["we","are","in"],["a","climate","crisis"]]`
In such a scenario, when calculating the overall perplexity for the document, the word "a" has no previous context and "climate" only has "a" as context. Ideally I would want the context to work in a rolling/striding way, so I edited the text loader to produce a list like:
`ex = [["we","are","in"],["are","in","a"],["in","a","climate"],["a","climate","crisis"]]`
Now, if I calculate perplexity over this `ex` list, many words will clearly be counted multiple times, since they appear in several sub-lists. For every instance from the 2nd onwards (`ex[1]` above), I would only want to consider the last word in the perplexity/loss calculation. So my question is how to tackle this: should I use a mask so that, say, in the case of
`ex[1] =["are","in","a"] `
with mask of [0,0,1] for loss calculation (and hence perplexity) it only takes "a" into account but for getting the context of "a' will also take previous 2 words in account?
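To make the question concrete, here is a rough sketch of the label-masking variant I have been experimenting with (it assumes, as far as I can tell, that positions set to -100 in `labels` are ignored by the LM loss, since `GPT2LMHeadModel` uses `CrossEntropyLoss` with its default `ignore_index`):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("we are in a climate crisis", return_tensors="pt")[0]
block_size = 3

total_nll, total_targets = 0.0, 0
with torch.no_grad():
    for i in range(input_ids.size(0) - block_size + 1):
        window = input_ids[i : i + block_size].unsqueeze(0)
        labels = window.clone()
        if i > 0:
            # From the 2nd window onwards, only the last token should count
            # towards the loss; the earlier positions still serve as context.
            labels[:, :-1] = -100
        loss = model(window, labels=labels)[0]
        # The model shifts labels internally, so the first window scores
        # block_size - 1 targets and every later window scores exactly 1.
        n_targets = block_size - 1 if i == 0 else 1
        total_nll += loss.item() * n_targets
        total_targets += n_targets

perplexity = torch.exp(torch.tensor(total_nll / total_targets))
print(perplexity)
```
(The idea being that the attention itself stays untouched, so the full window is still used as context and only the loss is restricted; I am not certain this is the intended approach, though.)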
Any help on the best way to approach this problem will be much appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4415/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4414/comments | https://api.github.com/repos/huggingface/transformers/issues/4414/events | https://github.com/huggingface/transformers/issues/4414 | 619,733,445 | MDU6SXNzdWU2MTk3MzM0NDU= | 4,414 | Get BERT sentence encoding | {
"login": "orenkobo",
"id": 50831837,
"node_id": "MDQ6VXNlcjUwODMxODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/50831837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orenkobo",
"html_url": "https://github.com/orenkobo",
"followers_url": "https://api.github.com/users/orenkobo/followers",
"following_url": "https://api.github.com/users/orenkobo/following{/other_user}",
"gists_url": "https://api.github.com/users/orenkobo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orenkobo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orenkobo/subscriptions",
"organizations_url": "https://api.github.com/users/orenkobo/orgs",
"repos_url": "https://api.github.com/users/orenkobo/repos",
"events_url": "https://api.github.com/users/orenkobo/events{/privacy}",
"received_events_url": "https://api.github.com/users/orenkobo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I believe you can't do like that, you have to run the model just as is with all the necessary inputs(pertaining to the sentence) as mentioned in the docs : https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel\r\nand then add the configuration : `config.output_hidden_states=True` for getting the embeddings from each intermediate encoding layers. ",
"@Sriharsha-hatwar Thanks, do you have a code sample maybe?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | I am trying to access the encoding of sentences in the various layers of a pre-trained BERT model.
So it should be something like this:
```
sentence = 'We bought a new car'
bert_encoder = load_encoder('bert-base-uncased')
enc = bert_encoder.encode(sentence)
enc.get_layer[0]  # this is the first layer
enc.get_layer[-1]  # this is the last layer
```
What is the best way to do it?
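For reference, this is the closest I have managed to piece together from the docs; I am not sure it is idiomatic, and it assumes that passing `output_hidden_states=True` makes the model return all layer activations as an extra tuple:
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
model.eval()

inputs = tokenizer.encode_plus('We bought a new car', return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# With output_hidden_states=True the outputs include an extra tuple holding
# the embedding output followed by one tensor per encoder layer, each of
# shape (batch_size, sequence_length, hidden_size).
hidden_states = outputs[2]
first_layer = hidden_states[1]   # output of the first encoder layer
last_layer = hidden_states[-1]   # output of the last encoder layer
```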
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4414/timeline | completed | null | null |