url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/7421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7421/comments | https://api.github.com/repos/huggingface/transformers/issues/7421/events | https://github.com/huggingface/transformers/issues/7421 | 710,102,458 | MDU6SXNzdWU3MTAxMDI0NTg= | 7,421 | "Sequence Classification with IMDb Reviews " error, when using "bert-base-multilingual-cased" model. | {
"login": "baiziyuandyufei",
"id": 20787650,
"node_id": "MDQ6VXNlcjIwNzg3NjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/20787650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baiziyuandyufei",
"html_url": "https://github.com/baiziyuandyufei",
"followers_url": "https://api.github.com/users/baiziyuandyufei/followers",
"following_url": "https://api.github.com/users/baiziyuandyufei/following{/other_user}",
"gists_url": "https://api.github.com/users/baiziyuandyufei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baiziyuandyufei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baiziyuandyufei/subscriptions",
"organizations_url": "https://api.github.com/users/baiziyuandyufei/orgs",
"repos_url": "https://api.github.com/users/baiziyuandyufei/repos",
"events_url": "https://api.github.com/users/baiziyuandyufei/events{/privacy}",
"received_events_url": "https://api.github.com/users/baiziyuandyufei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It looks like you are using a model for language modeling (`AutoModelWithLMHead`) instead of a model for sequence classification (`AutoModelForSequenceClassification`) which is why you have that shape error.",
"@sgugger thank you!\r\nI modify my code, then everything well.\r\n```\r\n# coding:utf-8\r\n\"\"\"\r\n\"\"\"\r\n\r\nfrom pathlib import Path\r\nfrom sklearn.model_selection import train_test_split\r\nimport torch\r\nfrom transformers import Trainer, TrainingArguments\r\nfrom nlp import load_dataset\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"model/bert-base-multilingual-cased\") \r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"model/bert-base-multilingual-cased\")\r\n\r\n\r\ndef read_imdb_split(split_dir):\r\n split_dir = Path(split_dir)\r\n texts = []\r\n labels = []\r\n for label_dir in [\"pos\", \"neg\"]:\r\n for text_file in (split_dir/label_dir).iterdir():\r\n texts.append(text_file.read_text())\r\n labels.append(0 if label_dir is \"neg\" else 1)\r\n\r\n return texts, labels\r\n\r\ntrain_texts, train_labels = read_imdb_split('dataset/ChnSentiCorp')\r\ntest_texts, test_labels = read_imdb_split('dataset/ChnSentiCorp')\r\ntrain_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)\r\n\r\n\r\ntrain_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=100, verbose=False)\r\nval_encodings = tokenizer(val_texts, truncation=True, padding=True, max_length=100, verbose=False)\r\ntest_encodings = tokenizer(test_texts, truncation=True, padding=True, max_length=100, verbose=False)\r\n\r\n\r\n\r\nclass IMDbDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings, labels):\r\n self.encodings = encodings\r\n self.labels = labels\r\n\r\n def __getitem__(self, idx):\r\n item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\r\n item['labels'] = torch.tensor(self.labels[idx])\r\n return item\r\n\r\n def __len__(self):\r\n return len(self.labels)\r\n\r\ntrain_dataset = IMDbDataset(train_encodings, train_labels)\r\nval_dataset = IMDbDataset(val_encodings, val_labels)\r\ntest_dataset = IMDbDataset(test_encodings, test_labels)\r\n\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir='./results',\r\n num_train_epochs=16,\r\n per_device_train_batch_size=1,\r\n per_device_eval_batch_size=1,\r\n warmup_steps=500,\r\n weight_decay=0.01,\r\n evaluate_during_training=True,\r\n logging_dir='./logs',\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=test_dataset\r\n)\r\n\r\ntrainer.train()\r\n```\r\n\r\n```\r\nEpoch: 0%| | 0/16 [00:00<?, ?it/s]\r\nIteration: 2%|█▏ | 65/3200 [03:04<2:26:14, 2.80s/it]\r\n```"
] | 1,601 | 1,601 | 1,601 | NONE | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: macOS
- Python version: 3.6
- PyTorch version (GPU?): CPU
- Tensorflow version (GPU?): CPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): bert-base-multilingual-cased
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. reference the code at https://huggingface.co/transformers/custom_datasets.html#seq-imdb
2. modify the code
```
# coding:utf-8
"""
"""
from pathlib import Path
from sklearn.model_selection import train_test_split
from transformers import DistilBertTokenizerFast
import torch
from transformers import Trainer, TrainingArguments
from nlp import load_dataset
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelWithLMHead.from_pretrained("bert-base-multilingual-cased")


def read_imdb_split(split_dir):
    split_dir = Path(split_dir)
    texts = []
    labels = []
    for label_dir in ["pos", "neg"]:
        for text_file in (split_dir / label_dir).iterdir():
            texts.append(text_file.read_text())
            labels.append(0 if label_dir == "neg" else 1)  # use ==, not `is`: compare values, not identity
    return texts, labels


train_texts, train_labels = read_imdb_split('dataset/aclImdb/train')
test_texts, test_labels = read_imdb_split('dataset/aclImdb/test')
train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)

train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=100)
val_encodings = tokenizer(val_texts, truncation=True, padding=True, max_length=100)
test_encodings = tokenizer(test_texts, truncation=True, padding=True, max_length=100)


class IMDbDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)


train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
test_dataset = IMDbDataset(test_encodings, test_labels)

training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    warmup_steps=500,
    weight_decay=0.01,
    evaluate_during_training=True,
    logging_dir='./logs',
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset
)

trainer.train()
```
3. The resulting error:
```
ValueError: Expected input batch_size (1600) to match target batch_size (16).
```
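
The mismatch (1600 = 16 examples × 100 tokens) happens because the LM head emits per-token vocabulary logits while the labels are per-sequence. A minimal sketch of the fix suggested in the comment thread, where the model class is the essential change:

```py
# Sketch of the suggested fix: load a sequence-classification head instead of
# an LM head, so the model outputs one set of class logits per example and
# the loss shapes line up with the (batch_size,) labels.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-multilingual-cased")
```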
## Expected behavior
Fine-tuning should start and run without the batch-size mismatch error.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7421/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7420/comments | https://api.github.com/repos/huggingface/transformers/issues/7420/events | https://github.com/huggingface/transformers/pull/7420 | 710,086,584 | MDExOlB1bGxSZXF1ZXN0NDkzOTk5MDI1 | 7,420 | [RAG] Model cards - clean cards | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | MEMBER | null | Clean the four model cards | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7420/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7420",
"html_url": "https://github.com/huggingface/transformers/pull/7420",
"diff_url": "https://github.com/huggingface/transformers/pull/7420.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7420.patch",
"merged_at": 1601284119000
} |
https://api.github.com/repos/huggingface/transformers/issues/7419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7419/comments | https://api.github.com/repos/huggingface/transformers/issues/7419/events | https://github.com/huggingface/transformers/issues/7419 | 709,987,840 | MDU6SXNzdWU3MDk5ODc4NDA= | 7,419 | Cannot reproduce example token classification GermEval 2014 (German NER) dataset | {
"login": "GarrettLee",
"id": 22522377,
"node_id": "MDQ6VXNlcjIyNTIyMzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/22522377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GarrettLee",
"html_url": "https://github.com/GarrettLee",
"followers_url": "https://api.github.com/users/GarrettLee/followers",
"following_url": "https://api.github.com/users/GarrettLee/following{/other_user}",
"gists_url": "https://api.github.com/users/GarrettLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GarrettLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GarrettLee/subscriptions",
"organizations_url": "https://api.github.com/users/GarrettLee/orgs",
"repos_url": "https://api.github.com/users/GarrettLee/repos",
"events_url": "https://api.github.com/users/GarrettLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/GarrettLee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I find that after deleting the cache, the results can be reproduced. I guess that is because in my first attempt, I used the wrong arguments setting, and something is cached. Then although I fixed the setting later, the code alway loads from the wrong cache. "
] | 1,601 | 1,601 | 1,601 | NONE | null | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-4.4.0-131-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.3.1 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@stefan-it Please help.
## Information
Model I am using (Bert, XLNet ...): bert-base-multilingual-cased
The problem arises when using:
* [x] the official example scripts: (give details below)
I am running the PyTorch version:
transformers/examples/token-classification/run_ner.py
The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
GermEval 2014 (German NER) dataset
## To reproduce
Steps to reproduce the behavior:
1. download dataset: https://drive.google.com/drive/folders/1kC0I2UGl2ltrluI9NqDjaQJGw5iliw_J?usp=sharing
2. Because our training environment cannot access the Internet, I downloaded the pretrained model from [https://huggingface.co/bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) and put all the downloaded files at transformers/examples/token-classification/bert-base-multilingual-cased
3. Run the preprocessing and training commands:
```
cat NER-de-train.tsv | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > train.txt.tmp
cat NER-de-dev.tsv | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > dev.txt.tmp
cat NER-de-test.tsv | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > test.txt.tmp
export MAX_LENGTH=128
export BERT_MODEL=./bert-base-multilingual-cased
python3 scripts/preprocess.py train.txt.tmp $BERT_MODEL $MAX_LENGTH > train.txt
python3 scripts/preprocess.py dev.txt.tmp $BERT_MODEL $MAX_LENGTH > dev.txt
python3 scripts/preprocess.py test.txt.tmp $BERT_MODEL $MAX_LENGTH > test.txt
cat train.txt dev.txt test.txt | cut -d " " -f 2 | grep -v "^$"| sort | uniq > labels.txt
export OUTPUT_DIR=germeval-model
export BATCH_SIZE=32
export NUM_EPOCHS=3
export SAVE_STEPS=750
export SEED=1
python3 run_ner.py --data_dir ./ \
--labels ./labels.txt \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_device_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--do_train \
--do_eval \
--do_predict
```
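
Per the resolution in the comment thread, features cached by an earlier run with different arguments can silently shadow the new settings. A minimal sketch, assuming the example script's default behavior of caching preprocessed features as `cached_*` files in the data directory:

```py
# Hedged sketch: delete stale feature caches before re-running; alternatively,
# pass --overwrite_cache if your version of the script supports that flag.
from pathlib import Path

for cached in Path(".").glob("cached_*"):
    cached.unlink()
```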
## Expected behavior
The F1 score on evaluation and test should be `0.8784592370979806` and `0.8624150210424085`, as the README states. However, running the script above on one V100 GPU, I get `0.83919` on evaluation and `0.81673` on test, much lower than expected.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7419/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7418/comments | https://api.github.com/repos/huggingface/transformers/issues/7418/events | https://github.com/huggingface/transformers/pull/7418 | 709,945,429 | MDExOlB1bGxSZXF1ZXN0NDkzODgwNTg1 | 7,418 | Blenderbot | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't really understand why we don't completely separate Blenderbot from Bart here. I thought we kind of agreed on not adding any if statements to existing models (also if it's only one) to make them work with new models.\r\n\r\nWith @sgugger's recent PRs that completely separate model files from each other (Roberta, Longformer, Electra from BERT), I don't see why we would not do the same here?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=h1) Report\n> Merging [#7418](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7296fea1d689f47de69fd45e438e42d65ca5a393?el=desc) will **increase** coverage by `1.89%`.\n> The diff coverage is `96.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7418 +/- ##\n==========================================\n+ Coverage 76.45% 78.35% +1.89% \n==========================================\n Files 181 184 +3 \n Lines 35781 35928 +147 \n==========================================\n+ Hits 27355 28150 +795 \n+ Misses 8426 7778 -648 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_blenderbot.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmxlbmRlcmJvdC5weQ==) | `95.83% <95.83%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.39% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.34% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.11% <100.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/configuration\\_blenderbot.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JsZW5kZXJib3QucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `86.08% <100.00%> (-0.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.98% <100.00%> (+0.87%)` | :arrow_up: |\n| [src/transformers/modeling\\_blenderbot.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ibGVuZGVyYm90LnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.64% <100.00%> (+0.10%)` | :arrow_up: |\n| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=footer). Last update [7296fea...978290a](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Spent 2H getting Blender+Bart docs more consistent.\r\n\r\n`blenderbot.rst` just points there instead of repeating.\r\n\r\n\r\n",
"Thanks for all the help @sgugger and @LysandreJik and sorry for being difficult.",
"Thanks for cleaning the docstrings and making all nice and shiny to go with the rest of the docs!\r\nHopefully it's going to be easier now that the templates have been updated.",
"<img src=\"https://media.giphy.com/media/osjgQPWRx3cac/giphy.gif\"/>",
"@sshleifer @stephenroller is there a particular reason why the 9.4B one wasn't ported over?\r\n\r\nI know it was mentioned in the paper that the 9.4B wasn't statistically any better than the 2.7B one in human evaluations, but it'd still be a useful release IMO.",
"I think it was just a matter of prioritization? I didn't directly work on it.\n\nI can only speak for myself, not HF, but I would welcome a PR adding support for the 9.4. It should only be a configuration change compared the 2.7, I would think.",
"@stephenroller I see, can you point me to the 9.4B model artifact? I'll see if I can load that using the HF class with some tweaks.",
"If you manage to load it, we'd love to host it on https://huggingface.co/facebook \r\n\r\ncc @patrickvonplaten and others",
"Think @patil-suraj is working on it :-)",
"Hey @patil-suraj are you actively working on this and have an ETA? I want to start playing with the 9.4B within HF asap.",
"The files model files are in this tarfile:\r\n\r\nhttps://dl.fbaipublicfiles.com/parlai/_models/blender/BST9B.tgz\r\n\r\nCompared to the 2.7B, the following hyperparameters are expected to change (2.7B setting -> 9.4B setting):\r\n- embedding size: 2560 -> 4096\r\n- hidden state size (ffn size): 10240 -> 16384\r\n- number of encoder layers: 2 -> 4\r\n- number of decoder layers: 24 -> 32\r\n- number of heads: 32 -> 32 (unchanged, but one might expect it to)\r\n\r\nThe dictionary and formatting is _exactly_ the same as the 2.7B model.",
"Hi @g-karthik , I'm working on it, it should be on hub by the end of next week",
"@patil-suraj can you please share the PR so I can take a look? I do not see the model on the model hub."
] | 1,601 | 1,613 | 1,602 | CONTRIBUTOR | null | Continued from https://github.com/huggingface/transformers/pull/4803
Co-authored by @mariamabarham
New models: `facebook/blenderbot-3B` and `facebook/blenderbot-90M`.
They produce similar, but not always identical outputs to their facebook counterparts, with the differences due to length penalty implementations.
They are identical to Bart, apart from one layernorm ordering change for the blenderbot-90M checkpoint:
```
if self.do_blenderbot_90_layernorm:
    x = self.layernorm_embedding(x)
    x += positions
else:
    x += positions
    x = self.layernorm_embedding(x)
```
I also wrote a [gist](https://gist.github.com/sshleifer/cb245b8739420724a32fc0c22344aee0) explaining the various layernorm sequences. Will update it once this is finalized.
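
A hedged usage sketch for the new checkpoints; the class names are taken from the docs added in this PR and may still shift before merge:

```py
# Hedged sketch: one conversational turn with the 90M checkpoint.
# BlenderbotSmallTokenizer is assumed to be the tokenizer for blenderbot-90M.
from transformers import BlenderbotForConditionalGeneration, BlenderbotSmallTokenizer

mname = "facebook/blenderbot-90M"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)

inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```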
The blenderbot 3b tests can run on 1 GPU, but are ridiculously slow on CPU.
Additionally, `test_feedforward_chunking` and `test_model_outputs_equivalence` were flaky locally, and are currently skipped.
#### Done
- [x] forward pass in one file
- [x] passing integration tests
#### TODO:
- [ ] `blenderbot.rst`
- [ ] model cards
### Ways to avoid new if statement
- Don't port bbot-90m.
- separate `Blenderbot90Model`.
- There are also solutions where we parametrize out EncoderLayer/DecoderLayer, but these seem more confusing/harder to understand/less consistent to me. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7418/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7418",
"html_url": "https://github.com/huggingface/transformers/pull/7418",
"diff_url": "https://github.com/huggingface/transformers/pull/7418.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7418.patch",
"merged_at": 1602112164000
} |
https://api.github.com/repos/huggingface/transformers/issues/7417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7417/comments | https://api.github.com/repos/huggingface/transformers/issues/7417/events | https://github.com/huggingface/transformers/issues/7417 | 709,940,207 | MDU6SXNzdWU3MDk5NDAyMDc= | 7,417 | Add adapter support | {
"login": "salimmj",
"id": 22433912,
"node_id": "MDQ6VXNlcjIyNDMzOTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/22433912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/salimmj",
"html_url": "https://github.com/salimmj",
"followers_url": "https://api.github.com/users/salimmj/followers",
"following_url": "https://api.github.com/users/salimmj/following{/other_user}",
"gists_url": "https://api.github.com/users/salimmj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/salimmj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/salimmj/subscriptions",
"organizations_url": "https://api.github.com/users/salimmj/orgs",
"repos_url": "https://api.github.com/users/salimmj/repos",
"events_url": "https://api.github.com/users/salimmj/events{/privacy}",
"received_events_url": "https://api.github.com/users/salimmj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"That'd be great!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Just voicing my support of this too. Not sure yet if adapter-hub will end up having the resources to merge back into `transformers`. If they become inactive, me (and a lot of the community I'm sure) will want to try to take up that effort."
] | 1,601 | 1,614 | 1,607 | NONE | null | # 🚀 Feature request
Add [adapter](https://arxiv.org/abs/1902.00751) support to transformers.
## Motivation
Adapters are great time-and-memory-savers for multitask use cases and would be a great addition to this library. Some very kind folks added support for them ([AdapterHub](https://adapterhub.ml/)) on top of the transformers library, but unfortunately in order to use them one needs to use their [fork](https://github.com/Adapter-Hub/adapter-transformers), which is slightly inconvenient.
## Your contribution
They've done the integration already so I hope it's straightforward. I've posted [an issue](https://github.com/Adapter-Hub/adapter-transformers/issues/65) on their end as well and would be happy to help in any way I can.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7417/reactions",
"total_count": 10,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/7417/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7416/comments | https://api.github.com/repos/huggingface/transformers/issues/7416/events | https://github.com/huggingface/transformers/issues/7416 | 709,857,672 | MDU6SXNzdWU3MDk4NTc2NzI= | 7,416 | Possible error in MBart Tokenization script -- target lang code is only present in seq once | {
"login": "Sun694",
"id": 9062244,
"node_id": "MDQ6VXNlcjkwNjIyNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9062244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sun694",
"html_url": "https://github.com/Sun694",
"followers_url": "https://api.github.com/users/Sun694/followers",
"following_url": "https://api.github.com/users/Sun694/following{/other_user}",
"gists_url": "https://api.github.com/users/Sun694/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sun694/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sun694/subscriptions",
"organizations_url": "https://api.github.com/users/Sun694/orgs",
"repos_url": "https://api.github.com/users/Sun694/repos",
"events_url": "https://api.github.com/users/Sun694/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sun694/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Note: I did find (https://github.com/pytorch/fairseq/issues/2258), a related issue.\r\n\r\nAs far as I can tell, the behavior there (attempting to zero-shot translate without the model having translated before, and merely getting the input as output regardless of language ID in target), is expected behavior (some fine-tuning is required on at least one language pair). \r\n\r\nI believe, for the target, `lang_code, text, <\\s>, lang_code` is correct, and matches the paper.\r\n\r\n",
"I've spent a fair amount of time on the `mBart` tokenization. It's very complicated.\r\nI ran the finetuning command documented in the README [here](https://github.com/pytorch/fairseq/tree/master/examples/mbart#finetune-on-en-ro) and set a breakpoint and looked at the various tensors: https://gist.github.com/sshleifer/cba08bc2109361a74ac3760a7e30e4f4\r\n\r\nWhat you can clean from that is our `labels` match fairseq `samples['target']`.\r\n```python\r\nsample['target'][0]\r\ntensor([ 9345, 202, 10, 181684, 36, 21635, 8454, 48993, 45587,\r\n 21, 57476, 1283, 98748, 451, 346, 8916, 202, 28,\r\n 9, 7, 451, 11650, 128402, 5, 2, 250020],\r\n device='cuda:0')\r\n\r\n```\r\n\r\nSo huggingface batches match fairseq code, but not the paper. This seems to improve translation finetuning and inference accuracy.",
"> I've spent a fair amount of time on the `mBart` tokenization. It's very complicated.\r\n> I ran the finetuning command documented in the README [here](https://github.com/pytorch/fairseq/tree/master/examples/mbart#finetune-on-en-ro) and set a breakpoint and looked at the various tensors: https://gist.github.com/sshleifer/cba08bc2109361a74ac3760a7e30e4f4\r\n> \r\n> What you can clean from that is our `labels` match fairseq `samples['target']`.\r\n> \r\n> ```python\r\n> sample['target'][0]\r\n> tensor([ 9345, 202, 10, 181684, 36, 21635, 8454, 48993, 45587,\r\n> 21, 57476, 1283, 98748, 451, 346, 8916, 202, 28,\r\n> 9, 7, 451, 11650, 128402, 5, 2, 250020],\r\n> device='cuda:0')\r\n> ```\r\n> \r\n> So huggingface batches match fairseq code, but not the paper. This seems to improve translation finetuning and inference accuracy.\r\n\r\nIs there any chance to update the documentation, indicating this discrepancy?\r\n\r\nI am using this in a multilingual translation setting -- having the lang_code only last, not first and last, means the model is not told what language to translate to. This is not an issue in bilingual settings. I expected the tokenization to match the paper and the documentation.\r\n\r\n```\r\nThe source text format is X [eos, src_lang_code] where X is the source text. The target text format is `[tgt_lang_code] X [eos]`\r\n```\r\n\r\nLike before, if I'm not misunderstanding something, I'd be willing to open a PR for this.\r\n\r\nThanks for the quick response.",
"It's also worth noting that if there is no dedicated BOS (like MBart), then during inference, you have no natural way to tell the decoder to start generating during inference -- the model never has predicted the first token of a sequence.\r\n\r\nThe example at https://huggingface.co/transformers/master/model_doc/mbart.html#overview prepends the language code during inference, but if that is not done during training as well, this causes domain shift.\r\n\r\nUnless the decoder (or something else) is editing targets behind-the-scenes (beyond \"shifting\" indexes one during training), I believe the current method of preparing batches is introducing domain shift.",
"You are missing the distinction between `decoder_input_ids` and `labels` I think.\r\nFor `mbart-large-en-ro` we have `decoder_start_token_id=250020` for this reason.\r\n\r\nThen in [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L147):\r\n\r\n```python\r\ndecoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)\r\noutputs = self(src_ids, attention_mask=src_mask, decoder_input_ids=decoder_input_ids, use_cache=False)\r\n```\r\n\r\n`shift_tokens_right` moves the language code to the 0th column of `decoder_input_ids`.\r\n\r\nYou can also read [this](https://github.com/huggingface/transformers/issues/6156#issuecomment-678537995) which is related.\r\n\r\nI would definitely welcome a contribution to the docs that explained this clearly!\r\n"
] | 1,601 | 1,602 | 1,602 | NONE | null | ## Environment info
- `transformers` version: current
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
### Who can help
MBart: @sshleifer
## Information
Model I am using is MBart.
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
```py
from transformers import MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro')
example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
batch: dict = tokenizer.prepare_seq2seq_batch(
    example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian
)
```
```
-snip-
'labels': tensor([[ 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]])}
```
The target language code is only present once in the target sequence.
`print(tokenizer.lang_code_to_id["ro_RO"])`
`250020`
## Expected behavior
```
'labels': tensor([[ 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]])}
```
Here, the target language code is first and last, as I believe MBart (https://arxiv.org/pdf/2001.08210.pdf, top of page 3) says.
MBart Excerpt:
```
For each instance of a batch we sample a language id symbol <LID> ...
sentences in the instance are separated by the end of sentence (</S>) token. Then, we append the selected<LID>
```
Here is the code I believe is wrong:
```py
def set_tgt_lang_special_tokens(self, lang: str) -> None:
    """Reset the special tokens to the target language setting. Prefix [tgt_lang_code], suffix =[eos]."""
    self.cur_lang_code = self.lang_code_to_id[lang]
    self.prefix_tokens = []
    self.suffix_tokens = [self.eos_token_id, self.cur_lang_code]
```
To me, the comment implies the language code should be first as well.
I tested it locally, and merely adding `self.cur_lang_code` to `self.prefix_tokens` resolves the issue.
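
Spelled out, the one-line change I tested (a sketch of the proposal, not the shipped behavior; see the maintainer replies in the comment thread for why the released tokenizer differs):

```py
def set_tgt_lang_special_tokens(self, lang: str) -> None:
    """Proposed variant: prefix [tgt_lang_code], suffix [eos, tgt_lang_code]."""
    self.cur_lang_code = self.lang_code_to_id[lang]
    self.prefix_tokens = [self.cur_lang_code]  # the proposed addition
    self.suffix_tokens = [self.eos_token_id, self.cur_lang_code]
```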
I do not know if I am misunderstanding the purpose of this script or misusing it. My above code is copied from the "MBartTokenizer" example at https://huggingface.co/transformers/master/model_doc/mbart.html#overview
If I didn't make a mistake, I'd be more than happy to open a PR to change that one line and fix it.
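
For reference, the resolution in the comment thread hinges on `shift_tokens_right`. A sketch of that helper as it appears in `modeling_bart.py` around this release (reproduced from memory, so verify against your installed version):

```py
# Sketch: labels end with [..., eos, tgt_lang_code]; the last non-pad token
# (the language code) is wrapped around to position 0, so decoder_input_ids
# start with the target language code without needing a dedicated BOS.
import torch

def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    prev_output_tokens = input_ids.clone()
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze()
    prev_output_tokens[:, 1:] = input_ids[:, :-1]
    return prev_output_tokens
```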
"url": "https://api.github.com/repos/huggingface/transformers/issues/7416/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7416/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7415/comments | https://api.github.com/repos/huggingface/transformers/issues/7415/events | https://github.com/huggingface/transformers/issues/7415 | 709,796,948 | MDU6SXNzdWU3MDk3OTY5NDg= | 7,415 | Colab pro -fine RoBERTa error tcmalloc: large alloc 6325288960 | {
"login": "Shafi2016",
"id": 56795978,
"node_id": "MDQ6VXNlcjU2Nzk1OTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/56795978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shafi2016",
"html_url": "https://github.com/Shafi2016",
"followers_url": "https://api.github.com/users/Shafi2016/followers",
"following_url": "https://api.github.com/users/Shafi2016/following{/other_user}",
"gists_url": "https://api.github.com/users/Shafi2016/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shafi2016/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shafi2016/subscriptions",
"organizations_url": "https://api.github.com/users/Shafi2016/orgs",
"repos_url": "https://api.github.com/users/Shafi2016/repos",
"events_url": "https://api.github.com/users/Shafi2016/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shafi2016/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Got the same with StyleGAN using Colab Pro",
"What is your data size? First, try it with 2GB of data. Also check again by decreasing the GPU batch size to 8 or 4.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,601 | 1,619 | 1,619 | NONE | null | I want to fine-tune RoBERTa on my newspaper data (around 8 GB) using Colab Pro. It works fine on small data. I have given my code below. Is my code correct? **Is there any way to handle this memory problem?** The error crashes the Colab runtime.
```
tcmalloc: large alloc 6325288960 bytes == 0x447fa000 @ 0x7f3438dcc1e7 0x59221c 0x4ca6f4 0x566daa 0x5a4df1 0x5a5eea 0x4ce082 0x566c02 0x5a4df1 0x5a60ae 0x5bd138 0x50a47f 0x50c1f4 0x507f24 0x509202 0x594b01 0x54a17f 0x5517c1 0x5a9eec 0x50a783 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918
```
```
!python "/content/transformers/examples/language-modeling/run_language_modeling.py" \
--output_dir "/content/drive/My Drive/EPU-NLP/FinetuneModel/output" \
--model_type roberta \
--model_name_or_path roberta-base \
--do_train \
--per_gpu_train_batch_size 16 \
--seed 22 \
--train_data_file "/content/drive/My Drive/EPU-NLP/FinetuneModel/data_all.txt" \
--block_size 256 \
--line_by_line \
--weight_decay 0.01 \
--adam_epsilon 1e-6 \
--save_total_limit 500 \
--learning_rate 6e-4 \
--num_train_epochs 3 \
--save_steps 500 \
--mlm
```
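
A hedged mitigation sketch building on the comment thread: `--line_by_line` materializes the whole training file in RAM before tokenizing, so sharding the 8 GB corpus (and/or lowering `--per_gpu_train_batch_size`) keeps the host allocation bounded. The shard size below is illustrative:

```py
# Hedged sketch: split the corpus into line-aligned shards and fine-tune on
# them one shard at a time; 2,000,000 lines per shard is an arbitrary choice.
lines_per_shard = 2_000_000
shard = None
with open("data_all.txt", encoding="utf-8") as src:
    for i, line in enumerate(src):
        if i % lines_per_shard == 0:
            if shard is not None:
                shard.close()
            shard = open(f"data_shard_{i // lines_per_shard:03d}.txt", "w", encoding="utf-8")
        shard.write(line)
if shard is not None:
    shard.close()
```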
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7415/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7415/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7414/comments | https://api.github.com/repos/huggingface/transformers/issues/7414/events | https://github.com/huggingface/transformers/issues/7414 | 709,769,902 | MDU6SXNzdWU3MDk3Njk5MDI= | 7,414 | GPT2LMHeadModel forward input | {
"login": "manzar96",
"id": 38495091,
"node_id": "MDQ6VXNlcjM4NDk1MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/38495091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manzar96",
"html_url": "https://github.com/manzar96",
"followers_url": "https://api.github.com/users/manzar96/followers",
"following_url": "https://api.github.com/users/manzar96/following{/other_user}",
"gists_url": "https://api.github.com/users/manzar96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manzar96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manzar96/subscriptions",
"organizations_url": "https://api.github.com/users/manzar96/orgs",
"repos_url": "https://api.github.com/users/manzar96/repos",
"events_url": "https://api.github.com/users/manzar96/events{/privacy}",
"received_events_url": "https://api.github.com/users/manzar96/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Your question would get more answers if you asked it over at https://discuss.huggingface.co, which are the forums for broad questions like this one. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | # ❓ Questions & Help
Hello,
I would like to fine-tune the GPT2 model on EmpatheticDialogues doing kind of conditional generation as like in this paper: https://arxiv.org/pdf/1911.11161.pdf
What concerns me is the format of the input_ids and labels in the forward function.
I think that concatenating the input with the target is a good solution separating them with a special token
(e.g. "hi! how are you? <endofinput> I am fine!)
However I am not sure what to do with the labels. Shall I mask all the input part and the padded tokens with -100 index and leave only the target part as is? or shall I mask with -100 only the padded tokens?
Thank you in advance :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7414/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7413/comments | https://api.github.com/repos/huggingface/transformers/issues/7413/events | https://github.com/huggingface/transformers/pull/7413 | 709,765,489 | MDExOlB1bGxSZXF1ZXN0NDkzNzQwMzI2 | 7,413 | [RAG] Clean Rag readme in examples | {
"login": "ola13",
"id": 1528523,
"node_id": "MDQ6VXNlcjE1Mjg1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1528523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ola13",
"html_url": "https://github.com/ola13",
"followers_url": "https://api.github.com/users/ola13/followers",
"following_url": "https://api.github.com/users/ola13/following{/other_user}",
"gists_url": "https://api.github.com/users/ola13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ola13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ola13/subscriptions",
"organizations_url": "https://api.github.com/users/ola13/orgs",
"repos_url": "https://api.github.com/users/ola13/repos",
"events_url": "https://api.github.com/users/ola13/events{/privacy}",
"received_events_url": "https://api.github.com/users/ola13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=h1) Report\n> Merging [#7413](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e50a931c118b9f55f77a743bf703f436bf7a7c29?el=desc) will **decrease** coverage by `0.33%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7413 +/- ##\n==========================================\n- Coverage 77.06% 76.72% -0.34% \n==========================================\n Files 181 181 \n Lines 35781 35781 \n==========================================\n- Hits 27575 27454 -121 \n- Misses 8206 8327 +121 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.10% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.89%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=footer). Last update [e50a931...e1fa8e9](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Improving RAG README.
Additionally, I'm adding a script that creates a standalone RAG checkpoint from generator and question-encoder checkpoints.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7413/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7413/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7413",
"html_url": "https://github.com/huggingface/transformers/pull/7413",
"diff_url": "https://github.com/huggingface/transformers/pull/7413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7413.patch",
"merged_at": 1601280399000
} |
https://api.github.com/repos/huggingface/transformers/issues/7412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7412/comments | https://api.github.com/repos/huggingface/transformers/issues/7412/events | https://github.com/huggingface/transformers/issues/7412 | 709,749,506 | MDU6SXNzdWU3MDk3NDk1MDY= | 7,412 | Unable to load pipeline for question answering | {
"login": "prince14322",
"id": 19497571,
"node_id": "MDQ6VXNlcjE5NDk3NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19497571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prince14322",
"html_url": "https://github.com/prince14322",
"followers_url": "https://api.github.com/users/prince14322/followers",
"following_url": "https://api.github.com/users/prince14322/following{/other_user}",
"gists_url": "https://api.github.com/users/prince14322/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prince14322/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prince14322/subscriptions",
"organizations_url": "https://api.github.com/users/prince14322/orgs",
"repos_url": "https://api.github.com/users/prince14322/repos",
"events_url": "https://api.github.com/users/prince14322/events{/privacy}",
"received_events_url": "https://api.github.com/users/prince14322/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Do you have internet access in the environment where your script is run? Can you do the following:\r\n\r\n```py\r\nfrom transformers import DistilBertModel\r\n\r\nmodel = DistilBertModel.from_pretrained(\"distilbert-base-cased\")\r\n```\r\n?",
"Sorry my bad.\r\nInternet was off.\r\nIt is working fine.\r\nThank you."
] | 1,601 | 1,601 | 1,601 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-4.19.112+-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
examples/distillation: @VictorSanh
documentation: @sgugger
-->
## Information
Model I am using: the pipeline for question answering.
The problem arises when using:
* [ ] the official example scripts: (give details below)
from transformers import pipeline
nlp_qa = pipeline('question-answering')
## To reproduce
Steps to reproduce the behavior:
1. Ran the below snippet on Kaggle:
```
from transformers import pipeline
nlp_qa = pipeline('question-answering')
```
### Error message I got
```
OSError: Can't load config for 'distilbert-base-cased'. Make sure that:
- 'distilbert-base-cased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'distilbert-base-cased' is the correct path to a directory containing a config.json file
```
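As the resolution in the comments shows, the failure came from the environment having no internet access, so the config could not be downloaded. When offline, a workaround (a sketch, assuming the model files were saved locally beforehand) is to point the pipeline at a local directory:
```python
from transformers import pipeline

# Assumes config.json, tokenizer files, and weights were downloaded in advance,
# e.g. with save_pretrained() on a machine that has internet access.
local_dir = "./distilbert-base-cased-distilled-squad"  # illustrative local path
nlp_qa = pipeline("question-answering", model=local_dir, tokenizer=local_dir)
```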
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
### Full error
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
242 if resolved_config_file is None:
--> 243 raise EnvironmentError
244 config_dict = cls._dict_from_json_file(resolved_config_file)
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-24-a978a087c38f> in <module>
1 from transformers import pipeline
2
----> 3 nlp_qa = pipeline('question-answering') # 1st try
4 # nlp_qa = pipeline('question-answering', model=model, tokenizer = tokenizer, device=torch.cuda.current_device())
/opt/conda/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs)
1787 if isinstance(tokenizer, tuple):
1788 # For tuple we have (tokenizer name, {kwargs})
-> 1789 tokenizer = AutoTokenizer.from_pretrained(tokenizer[0], **tokenizer[1])
1790 else:
1791 tokenizer = AutoTokenizer.from_pretrained(tokenizer)
/opt/conda/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
193 config = kwargs.pop("config", None)
194 if not isinstance(config, PretrainedConfig):
--> 195 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
196
197 if "bert-base-japanese" in pretrained_model_name_or_path:
/opt/conda/lib/python3.7/site-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
194
195 """
--> 196 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
197
198 if "model_type" in config_dict:
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
250 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
251 )
--> 252 raise EnvironmentError(msg)
253
254 except json.JSONDecodeError:
OSError: Can't load config for 'distilbert-base-cased'. Make sure that:
- 'distilbert-base-cased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'distilbert-base-cased' is the correct path to a directory containing a config.json file
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7412/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7411/comments | https://api.github.com/repos/huggingface/transformers/issues/7411/events | https://github.com/huggingface/transformers/issues/7411 | 709,635,467 | MDU6SXNzdWU3MDk2MzU0Njc= | 7,411 | Error: isTensor() INTERNAL ASSERT FAILED from traced RoBERTa model on iOS using LibTorch | {
"login": "jbmaxwell",
"id": 15166432,
"node_id": "MDQ6VXNlcjE1MTY2NDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/15166432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbmaxwell",
"html_url": "https://github.com/jbmaxwell",
"followers_url": "https://api.github.com/users/jbmaxwell/followers",
"following_url": "https://api.github.com/users/jbmaxwell/following{/other_user}",
"gists_url": "https://api.github.com/users/jbmaxwell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbmaxwell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbmaxwell/subscriptions",
"organizations_url": "https://api.github.com/users/jbmaxwell/orgs",
"repos_url": "https://api.github.com/users/jbmaxwell/repos",
"events_url": "https://api.github.com/users/jbmaxwell/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbmaxwell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Ack! The error was actually in my Obj-C++ code, which had `auto outputTensor = _impl.forward({tensor}).toTensor();`... that will have to become `auto outputTuple = _impl.forward({tensor}).toTuple();`. Apologies for the spam, but hopefully this helps someone else some day. I found the hint here: https://github.com/pytorch/pytorch/issues/32039#issuecomment-573167212",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | I've exported RoBERTa as a traced model for running on iOS using LibTorch, and I'm getting this error when running prediction in the app: `isTensor() INTERNAL ASSERT FAILED at /Users/distiller/project/aten/src/ATen/core/ivalue_inl.h:86, please report a bug to PyTorch. Expected Tensor but got Tuple (toTensor at /Users/distiller/project/aten/src/ATen/core/ivalue_inl.h:86)`. I'm using the BertTokenizer because I have a small, fixed vocabulary (not natural language), and found it easier to use this vocabulary with the BERT tokenizer (happy to be corrected on this). I can train and test the model without issue in Python.
My conversion code is as follows (it's very possible I've done something wrong here!):
```
tokenizer = BertTokenizer('./data/vocab.txt')
config = RobertaConfig(
vocab_size=858,
max_position_embeddings=258,
num_attention_heads=6,
num_hidden_layers=4,
type_vocab_size=1,
torchscript=True
)
model = RobertaForMaskedLM(config=config).from_pretrained('./trained_RoBERTa')
model.cpu()
model.eval()
example_input = torch.LongTensor(1, 256).random_(0, 857).cpu()
traced_model = torch.jit.trace(model, example_input)
traced_model.save('./exports/trained_RoBERTa.pt')
```
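As the comment above explains, the root cause is that a model traced with `torchscript=True` returns a tuple, so the Obj-C++ side must unpack the result with `toTuple()` rather than `toTensor()`. A quick Python-side check (a sketch reusing the variables above) makes that visible:
```python
# Sketch: inspect what the traced model actually returns.
out = traced_model(example_input)
print(type(out))     # a tuple, not a single Tensor, when torchscript=True
print(out[0].shape)  # the prediction scores live in the first element
```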
Transformers version: 3.2.0
Ubuntu 18.04
Python 3.7.2
PyTorch 1.5
CUDA 10.2
I should mention that if there's a relatively painless path to using CoreML instead of LibTorch I'd love to hear about it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7411/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7410/comments | https://api.github.com/repos/huggingface/transformers/issues/7410/events | https://github.com/huggingface/transformers/pull/7410 | 709,621,880 | MDExOlB1bGxSZXF1ZXN0NDkzNjM1NzMy | 7,410 | [s2s] rougeLSum expects \n between sentences | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=h1) Report\n> Merging [#7410](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eab5f59682cf197cd5fd19d499b3670dbef67000?el=desc) will **decrease** coverage by `0.87%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7410 +/- ##\n==========================================\n- Coverage 77.77% 76.89% -0.88% \n==========================================\n Files 181 181 \n Lines 35781 35781 \n==========================================\n- Hits 27828 27514 -314 \n- Misses 7953 8267 +314 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.04% <0.00%> (+1.03%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/configuration\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.58% <0.00%> (+2.41%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=footer). Last update [eab5f59...ee83da0](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #6808
Continues #7356 from @swethmandava
Coauthor: @swethmandava
+ `add_newline_sep` kwarg controls whether to add newlines between sentences (see the sketch below)
+ test coverage
+ you can pass `bootstrap=False` to see raw scores and make scoring deterministic
+ Verified metrics improvement for BART on CNN/DailyMail, no change for XSUM
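A minimal demonstration of the rougeLsum behavior this PR accounts for (a sketch, assuming the `rouge_score` package):
```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeLsum"])
ref, pred = "The cat sat. The dog ran.", "The dog ran. The cat sat."

# Without "\n", rougeLsum sees each summary as one long sentence and
# degenerates into a plain whole-string LCS:
flat = scorer.score(ref, pred)
# With "\n" between sentences, the sentence-level LCS is computed as intended:
split = scorer.score(ref.replace(". ", ".\n"), pred.replace(". ", ".\n"))
print(flat["rougeLsum"].fmeasure, split["rougeLsum"].fmeasure)  # flat scores lower
```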
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7410/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7410",
"html_url": "https://github.com/huggingface/transformers/pull/7410",
"diff_url": "https://github.com/huggingface/transformers/pull/7410.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7410.patch",
"merged_at": 1601238439000
} |
https://api.github.com/repos/huggingface/transformers/issues/7409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7409/comments | https://api.github.com/repos/huggingface/transformers/issues/7409/events | https://github.com/huggingface/transformers/pull/7409 | 709,589,908 | MDExOlB1bGxSZXF1ZXN0NDkzNjEzNTY2 | 7,409 | [T5] allow config.decoder_layers to control decoder size | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=h1) Report\n> Merging [#7409](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2c8ecdf8a87019c438262d8c692e1bdffe05149f?el=desc) will **decrease** coverage by `0.73%`.\n> The diff coverage is `98.05%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7409 +/- ##\n==========================================\n- Coverage 77.58% 76.85% -0.74% \n==========================================\n Files 181 181 \n Lines 35725 35784 +59 \n==========================================\n- Hits 27719 27501 -218 \n- Misses 8006 8283 +277 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <81.81%> (ø)` | |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.55% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.35% <100.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `84.33% <100.00%> (+0.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.48% <100.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.43% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.62% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `82.83% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `97.35% <100.00%> (+0.01%)` | :arrow_up: |\n| ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=footer). Last update [eab5f59...ac38a32](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Renamed it to `num_decoder_layers`, and fixed docstring!\r\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | <!-- This line specifies which issue to close after the pull request is merged. -->
#### Problem
arxiv.org/abs/2006.10369, among others, shows that models with fewer decoder layers than encoder layers can perform well and run generation much faster. Right now it is difficult to do distillation on T5 because there is only `T5Config.num_layers`, which controls both the encoder and the decoder layers.
#### Solution
- add `config.decoder_layers` to control the number of decoder layers (usage sketched below)
- maintain 100% backwards compatibility by defaulting `config.decoder_layers = num_layers`
- add tests
- 4 line PR besides tests+ docs :)
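A usage sketch (note that one of the comments above mentions the option was renamed to `num_decoder_layers` before merging; this sketch assumes that final name):
```python
from transformers import T5Config, T5ForConditionalGeneration

# Asymmetric T5: keep the 6 encoder layers of t5-small, shrink the decoder to 3.
config = T5Config.from_pretrained("t5-small", num_decoder_layers=3)
model = T5ForConditionalGeneration(config)
print(len(model.encoder.block), len(model.decoder.block))  # 6 3
```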
### Testing
- slow t5 tests pass | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7409",
"html_url": "https://github.com/huggingface/transformers/pull/7409",
"diff_url": "https://github.com/huggingface/transformers/pull/7409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7409.patch",
"merged_at": 1601276884000
} |
https://api.github.com/repos/huggingface/transformers/issues/7408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7408/comments | https://api.github.com/repos/huggingface/transformers/issues/7408/events | https://github.com/huggingface/transformers/issues/7408 | 709,582,299 | MDU6SXNzdWU3MDk1ODIyOTk= | 7,408 | Allow creation of asymmetrical T5 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | https://arxiv.org/abs/2006.10369, among others, shows that models with fewer decoder layers than encoder layers can perform well and run generation much faster. Right now it is difficult to do distillation on T5 because there is only `T5Config.num_layers` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7408/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7407/comments | https://api.github.com/repos/huggingface/transformers/issues/7407/events | https://github.com/huggingface/transformers/issues/7407 | 709,545,205 | MDU6SXNzdWU3MDk1NDUyMDU= | 7,407 | How to train a model based on CTRL | {
"login": "nooralahzadeh",
"id": 1093791,
"node_id": "MDQ6VXNlcjEwOTM3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1093791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nooralahzadeh",
"html_url": "https://github.com/nooralahzadeh",
"followers_url": "https://api.github.com/users/nooralahzadeh/followers",
"following_url": "https://api.github.com/users/nooralahzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/nooralahzadeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nooralahzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nooralahzadeh/subscriptions",
"organizations_url": "https://api.github.com/users/nooralahzadeh/orgs",
"repos_url": "https://api.github.com/users/nooralahzadeh/repos",
"events_url": "https://api.github.com/users/nooralahzadeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/nooralahzadeh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | # ❓ Questions & Help
I am wondering how to train a Conditional Transformer Language Model for Controllable Generation (CTRL).
Thanks
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I saw that there is code for text generation based on CTRL, but I did not find any for the training phase.
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7407/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7406/comments | https://api.github.com/repos/huggingface/transformers/issues/7406/events | https://github.com/huggingface/transformers/issues/7406 | 709,523,009 | MDU6SXNzdWU3MDk1MjMwMDk= | 7,406 | Bert base chinese model gives error :- EagerTensor object has no attribute 'size' | {
"login": "akanyaani",
"id": 11317416,
"node_id": "MDQ6VXNlcjExMzE3NDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/11317416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akanyaani",
"html_url": "https://github.com/akanyaani",
"followers_url": "https://api.github.com/users/akanyaani/followers",
"following_url": "https://api.github.com/users/akanyaani/following{/other_user}",
"gists_url": "https://api.github.com/users/akanyaani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akanyaani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akanyaani/subscriptions",
"organizations_url": "https://api.github.com/users/akanyaani/orgs",
"repos_url": "https://api.github.com/users/akanyaani/repos",
"events_url": "https://api.github.com/users/akanyaani/events{/privacy}",
"received_events_url": "https://api.github.com/users/akanyaani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You are mixing the Pytorch and tf API. You should have \"return_tensors=\"pt\"\" if you use PyTorch or use TFAutoModel for TensorFlow.",
"@esp32wrangler is correct!"
] | 1,601 | 1,602 | 1,602 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Platform:
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 3.2.1
- Using GPU in script?:
- Using distributed or parallel set-up in script?: No
## Information
I am just trying to get BERT embeddings using the Chinese BERT base model, as explained on GitHub, but I am getting an error.
Model I am using (Bert, XLNet ...): BERT (Chinese)
## To reproduce
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese")
inputs = tokenizer("和 管理 , 发挥 公路", return_tensors="tf")
outputs = model(**inputs)
```
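As the comments above point out, this snippet builds TensorFlow tensors (`return_tensors="tf"`) but feeds them into a PyTorch model, which triggers the error below. A sketch of both possible fixes:
```python
# Fix 1 (PyTorch): keep AutoModel and ask the tokenizer for PyTorch tensors.
inputs = tokenizer("和 管理 , 发挥 公路", return_tensors="pt")
outputs = model(**inputs)

# Fix 2 (TensorFlow): load the TF model and keep return_tensors="tf".
from transformers import TFAutoModel

tf_model = TFAutoModel.from_pretrained("bert-base-chinese")
tf_inputs = tokenizer("和 管理 , 发挥 公路", return_tensors="tf")
tf_outputs = tf_model(tf_inputs)
```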
Error
```
AttributeError Traceback (most recent call last)
<ipython-input-30-481c0ebb1173> in <module>
1 inputs = tokenizer("和 管理 , 发挥 公路", return_tensors="tf")
2
----> 3 outputs = model(**inputs)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict)
789 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
790 elif input_ids is not None:
--> 791 input_shape = input_ids.size()
792 elif inputs_embeds is not None:
793 input_shape = inputs_embeds.size()[:-1]
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'size'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7406/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7405/comments | https://api.github.com/repos/huggingface/transformers/issues/7405/events | https://github.com/huggingface/transformers/pull/7405 | 709,417,863 | MDExOlB1bGxSZXF1ZXN0NDkzNDgxNzI3 | 7,405 | Add summarization support to ONNX conversion | {
"login": "sagarreddypatil",
"id": 16482184,
"node_id": "MDQ6VXNlcjE2NDgyMTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/16482184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagarreddypatil",
"html_url": "https://github.com/sagarreddypatil",
"followers_url": "https://api.github.com/users/sagarreddypatil/followers",
"following_url": "https://api.github.com/users/sagarreddypatil/following{/other_user}",
"gists_url": "https://api.github.com/users/sagarreddypatil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sagarreddypatil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sagarreddypatil/subscriptions",
"organizations_url": "https://api.github.com/users/sagarreddypatil/orgs",
"repos_url": "https://api.github.com/users/sagarreddypatil/repos",
"events_url": "https://api.github.com/users/sagarreddypatil/events{/privacy}",
"received_events_url": "https://api.github.com/users/sagarreddypatil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=h1) Report\n> Merging [#7405](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e50a931c118b9f55f77a743bf703f436bf7a7c29?el=desc) will **decrease** coverage by `0.61%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7405 +/- ##\n==========================================\n- Coverage 77.06% 76.45% -0.62% \n==========================================\n Files 181 181 \n Lines 35781 35781 \n==========================================\n- Hits 27575 27356 -219 \n- Misses 8206 8425 +219 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.10% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=footer). Last update [e50a931...ee12607](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,608 | 1,608 | NONE | null | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #7404
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7405/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7405",
"html_url": "https://github.com/huggingface/transformers/pull/7405",
"diff_url": "https://github.com/huggingface/transformers/pull/7405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7405.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7404/comments | https://api.github.com/repos/huggingface/transformers/issues/7404/events | https://github.com/huggingface/transformers/issues/7404 | 709,300,069 | MDU6SXNzdWU3MDkzMDAwNjk= | 7,404 | Add support for exporting summarization models to ONNX | {
"login": "sagarreddypatil",
"id": 16482184,
"node_id": "MDQ6VXNlcjE2NDgyMTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/16482184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagarreddypatil",
"html_url": "https://github.com/sagarreddypatil",
"followers_url": "https://api.github.com/users/sagarreddypatil/followers",
"following_url": "https://api.github.com/users/sagarreddypatil/following{/other_user}",
"gists_url": "https://api.github.com/users/sagarreddypatil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sagarreddypatil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sagarreddypatil/subscriptions",
"organizations_url": "https://api.github.com/users/sagarreddypatil/orgs",
"repos_url": "https://api.github.com/users/sagarreddypatil/repos",
"events_url": "https://api.github.com/users/sagarreddypatil/events{/privacy}",
"received_events_url": "https://api.github.com/users/sagarreddypatil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I just realized that the triu operation was fixed recently, sorry about that. I will create a PR to add summarization to the ONNX conversion script.",
"@sagarreddypatil Did you need to make any other changes to get summarization working with ONNX Runtime? It only appears to work for text with five tokens for me, which I believe it due to <strike>the [dummy inputs](https://huggingface.co/transformers/serialization.html#dummy-inputs-and-standard-lengths)</strike> (edit: looks to be due to the `infer_shapes` method)\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\nimport onnxruntime as rt\r\nimport numpy as np\r\n\r\ntext = \"one two three four five\"\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-large-cnn\")\r\ntokens = tokenizer(text)\r\n\r\ninput = {\r\n 'input_ids': np.array([tokens['input_ids']]),\r\n 'attention_mask': np.array([tokens['attention_mask']])\r\n}\r\n\r\nsess = rt.InferenceSession(\"bart-large-cnn.onnx\")\r\noutput = sess.run(None, input)\r\nprint(output)\r\n```\r\n\r\nOther token counts fail with:\r\n\r\n```text\r\n2020-10-05 16:44:25.884944 [E:onnxruntime:, sequential_executor.cc:318 Execute] Non-zero status code returned while running Reshape node. Name:'Reshape_62' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, std::vector<int64_t> &) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,7}, requested shape:{6}\r\n```",
"I actually did encounter that issue. In my \"hotfix\", I simply added the summarization option to the list, but I believe the implementation of how the ONNX model is made needs to be changed. But yes, it does not work for more than 5 tokens. I am not really sure how to fix that, but you seem to be better with this than I am.",
"Any chance someone was able to solve this issue?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I'm receiving this error \r\n\r\n`Error while converting the model: The type of axis index is expected to be an integer\r\n`\r\n\r\nwhen trying to convert bart-large-cnn\r\n\r\n`python3 -m transformers.convert_graph_to_onnx --model facebook/bart-large-cnn --framework pt bart-large-cnn.onnx\r\n`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> I'm receiving this error\r\n> \r\n> `Error while converting the model: The type of axis index is expected to be an integer `\r\n> \r\n> when trying to convert bart-large-cnn\r\n> \r\n> `python3 -m transformers.convert_graph_to_onnx --model facebook/bart-large-cnn --framework pt bart-large-cnn.onnx `\r\n\r\nI'm encountering the same issue when exporting a gpt2 model using --pipeline text-generation.",
"We're currently working on a rework of the ONNX implementation within Transformers, which is available here: https://github.com/huggingface/transformers/pull/11786\r\n\r\nInstead of offering a script to enable conversions for all models (which was not kept up to date with recent model releases), we're opting for a case-by-case approach, while offering the tools to convert models manually in a straightforward and simple manner; by creating `OnnxConfig` configuration objects to specify the input and output types of each model.\r\n\r\nPlease take a look at the PR and give us your feedback.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,601 | 1,626 | 1,626 | NONE | null | # 🚀 Feature request
Add support for exporting summarization models to ONNX.
## Motivation
I want to serve summarization models at the edge, through an ONNX runtime. However, I am unable to convert facebook/bart-large-cnn (using the `BartForConditionalGeneration` class) to ONNX, as the provided script doesn't support the summarization pipeline, due to PyTorch not being able to export the triu operator to ONNX. There are workarounds listed at https://github.com/pytorch/pytorch/issues/32968, though, which could be used to make this possible.
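For reference, a sketch of the conversion call that currently fails (this assumes the signature of `transformers.convert_graph_to_onnx` at the time of writing; the `opset` value is illustrative):
```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

convert(
    framework="pt",
    model="facebook/bart-large-cnn",
    output=Path("bart-large-cnn.onnx"),
    opset=11,
    pipeline_name="summarization",  # not among the script's supported pipelines yet
)
```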
## Your contribution
I don't really know the internals of PyTorch that well, so I don't think I can make any direct contributions.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7404/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7403/comments | https://api.github.com/repos/huggingface/transformers/issues/7403/events | https://github.com/huggingface/transformers/pull/7403 | 709,228,630 | MDExOlB1bGxSZXF1ZXN0NDkzMzI3NzY0 | 7,403 | [makefile] 10x speed up checking/fixing | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=h1) Report\n> Merging [#7403](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e50a931c118b9f55f77a743bf703f436bf7a7c29?el=desc) will **increase** coverage by `1.83%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7403 +/- ##\n==========================================\n+ Coverage 77.06% 78.90% +1.83% \n==========================================\n Files 181 181 \n Lines 35781 35781 \n==========================================\n+ Hits 27575 28233 +658 \n+ Misses 8206 7548 -658 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.70% <0.00%> (-22.68%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `83.11% <0.00%> (-10.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.33% <0.00%> (-7.58%)` | :arrow_down: |\n| ... 
and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=footer). Last update [e50a931...9c60161](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This looks cool! Can add the file argument to `check_copies` since it's easy (you can do it too if you prefer to have it in one PR) but since `check_copies` is super fast now, it probably won't add much.",
"I meant if you wanted to re-enable the blackify functions that were slowing things down.\r\n",
"I'm waiting to see if there actual use cases for that before re-enabling it as there are some ways to make it faster by rewriting the whole script. It would slow down your fast command by quite a bit if a file like roberta has been changed.",
"Hmm, I haven't thought of the scenario of when a branch gets re-based, since if that is done - currently it will add all the files modified since the branching and not just the files modified by the PR. There must be a way to subtract those changes in the master. I will have to think some more.\r\n\r\nIf we can successfully get that minimal list of files then any developer not working on roberta shouldn't get impacted by its slowdown.\r\n\r\n**edit**: it works just fine with rebasing - it only shows other files if you rebased and haven't committed the change. So all is good here."
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | I present to you an updated `fixup` target which is now super-fast, as it only fixes and validates files that were modified since the branching point, which usually is ~5 out of ~1000. Whoah! Give it a try:
```
make fixup
```
Because of the start-up overhead, and because the two custom scripts aren't optimized yet (to check only modified files), it's currently about 10 times faster.
before this PR:
```
time make fixup
real 0m19.272s
user 2m28.253s
sys 0m2.794s
```
after this PR:
```
time make fixup
real 0m2.864s
user 0m2.849s
sys 0m0.778s
```
So what's happening here:
1. `git merge-base --fork-point master` - gets the SHA of the branching point
2. `git diff --name-only $(git merge-base --fork-point master)` - gives us all the filenames that were modified since the branching point (regardless of whether they were staged, pushed, or are still local). The only missing pieces would be newly added files that aren't under git yet - that shouldn't be a problem though.
3. Finally, we want to check only specific top folders, so:
`git diff --name-only $(git merge-base --fork-point master) | egrep '^(examples|templates|tests|src|utils)'`
4. Now feed that to flake8, black, isort, etc.:
`flake8 $(git diff --name-only $(git merge-base --fork-point master) | egrep '^(examples|templates|tests|src|utils)')`
but only if there were modified files, so there is an `if` check in the Makefile. Otherwise, if we get no match, it runs wild on the whole repo and reports things we don't care about.
@sgugger, so if you modify `utils/check_copies.py` to optionally take specific filenames, we can unleash its full power (a hypothetical sketch of such an interface follows). If you are up for it, do the required modifications and I will take care of the Makefile to pass them on.
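To make that concrete, a hypothetical sketch of the interface (none of this is the actual `utils/check_copies.py` code; names and globs are illustrative):
```python
import argparse
import glob

parser = argparse.ArgumentParser()
parser.add_argument("files", nargs="*", help="only check these files; default: the whole repo")
args = parser.parse_args()

# Fall back to scanning every source file when no filenames are passed.
targets = args.files or glob.glob("src/transformers/**/*.py", recursive=True)
for path in targets:
    print(f"would check copies in {path}")  # placeholder for the real check
```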
----
## refactor repeated dir listings
This PR also refactors the repeated directory listings into a single variable at the top of the file.
@sgugger, @LysandreJik, @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7403/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7403",
"html_url": "https://github.com/huggingface/transformers/pull/7403",
"diff_url": "https://github.com/huggingface/transformers/pull/7403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7403.patch",
"merged_at": 1601304343000
} |
https://api.github.com/repos/huggingface/transformers/issues/7402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7402/comments | https://api.github.com/repos/huggingface/transformers/issues/7402/events | https://github.com/huggingface/transformers/issues/7402 | 709,207,896 | MDU6SXNzdWU3MDkyMDc4OTY= | 7,402 | Tokenizers as an optional dependency | {
"login": "jeanm",
"id": 107696,
"node_id": "MDQ6VXNlcjEwNzY5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/107696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeanm",
"html_url": "https://github.com/jeanm",
"followers_url": "https://api.github.com/users/jeanm/followers",
"following_url": "https://api.github.com/users/jeanm/following{/other_user}",
"gists_url": "https://api.github.com/users/jeanm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeanm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeanm/subscriptions",
"organizations_url": "https://api.github.com/users/jeanm/orgs",
"repos_url": "https://api.github.com/users/jeanm/repos",
"events_url": "https://api.github.com/users/jeanm/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeanm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sounds reasonable, but – if you're at liberty to share – and out of curiosity, would also like to know why you can't install Rust-built native deps",
"Thanks for the quick response. Rust is generally fine – what's causing issues is specifically the `pyo3` crate, which has a somewhat involved build script which doesn't get along with our build system.",
"Hi @jeanm, I work on the tokenizers library, can you explain how you are unable to use `tokenizers` ? You should never have to \"see\" Rust as we ship prebuilt libraries.\r\nMaybe we are missing a platform we should add so that you don't have to build from source and so you don't have an issue with Rust or Pyo3 ?",
"Only using the python tokenizers may prevent you from running some example scripts and use some additional functionalities of the library in the future though since we plan to rely more and more on the fast alignements tools provided by the Rust tokenizers to make processing simpler and more accurate.\r\n\r\nDo you think you could give us more details on the issue so that we can try to make the tokenizers library compatible with your system?\r\n\r\nHappy to talk further by DM/mail if it's easier for you to give some details, you can ping me by email or twitter/linkedin for instance.",
"Hi @Narsil @thomwolf, thanks for the responses. As a matter of policy (+ technical reasons I unfortunately cannot get into) we have to build all python wheels from source. If it were possible to make `tokenizers` optional without complicating things on your end, we would be perfectly fine with dealing with reduced functionality, as that's still much better than not being able to run the package at all :)"
] | 1,601 | 1,603 | 1,603 | NONE | null | # 🚀 Feature request
Would it be possible to make `tokenizers` an optional dependency? I see this was already attempted here by @thomwolf: https://github.com/huggingface/transformers/pull/2342.
## Motivation
For various reasons we are unable to support Rust in our environment. Given that `tokenizers` is a hard dependency, this means we cannot use `transformers` at all. We would be fine with using the non-fast versions of tokenizers as a workaround.
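For what it's worth, here is a minimal sketch of the usual optional-dependency pattern this request implies - illustrative only, not the library's actual internals:

```python
# Illustrative optional-import pattern; not transformers' actual code.
try:
    import tokenizers  # noqa: F401  (the Rust-backed fast tokenizers)

    _tokenizers_available = True
except ImportError:
    _tokenizers_available = False


def is_tokenizers_available():
    return _tokenizers_available

# Call sites would then fall back to the pure-Python ("slow") tokenizers, e.g.:
# tokenizer_cls = BertTokenizerFast if is_tokenizers_available() else BertTokenizer
```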
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7402/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7402/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7401/comments | https://api.github.com/repos/huggingface/transformers/issues/7401/events | https://github.com/huggingface/transformers/pull/7401 | 709,207,225 | MDExOlB1bGxSZXF1ZXN0NDkzMzA5NzI3 | 7,401 | Catch PyTorch warning when saving/loading scheduler | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=h1) Report\n> Merging [#7401](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e50a931c118b9f55f77a743bf703f436bf7a7c29?el=desc) will **increase** coverage by `2.28%`.\n> The diff coverage is `13.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7401 +/- ##\n==========================================\n+ Coverage 77.06% 79.35% +2.28% \n==========================================\n Files 181 181 \n Lines 35781 35793 +12 \n==========================================\n+ Hits 27575 28403 +828 \n+ Misses 8206 7390 -816 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.01% <13.33%> (-0.70%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `17.46% <0.00%> (-81.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.31% <0.00%> (-10.12%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.61% <0.00%> (+0.33%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=footer). Last update [e50a931...864dd99](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"worth pushing upstream to `pytorch`?",
"We can certainly file an issue about it but my guess is that they though a warning always passed was fine (since there is no way to know if the user is saving/loading the optimizer with its scheduler).",
"Thanks for this!\r\n\r\nRegarding whether to push upstream to pytorch: \r\nmaybe a solution is to add optional flag to the pytorch save command like optimizer_was_saved. Make it default False. Only if you explicitly mark the param true in your call to save the optimizer will the warning be suppressed. Puts all the onus on the calling user. "
] | 1,601 | 1,602 | 1,601 | COLLABORATOR | null | When saving or loading the scheduler, PyTorch **always** emits a warning telling you to save/load the optimizer state as well (with a typo in the message). We do save/load the optimizer state along with the scheduler, but there is no way to tell PyTorch that and avoid the annoying warning (and its typo).
This PR fixes that by catching all warnings while loading/saving the scheduler and then reissuing any unexpected ones.
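In rough pseudo-form, the approach looks like this - a simplified sketch, where the "optimizer" substring filter is an assumption about PyTorch's warning text, not the exact Trainer code:

```python
# Simplified sketch of the catch-and-reissue approach; not the exact Trainer code.
import warnings


def load_scheduler_state(scheduler, state_dict):
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        scheduler.load_state_dict(state_dict)
    for w in caught:
        # Reissue anything that isn't the expected "please also save/load the
        # optimizer" warning (the substring check is an assumption).
        if "optimizer" not in str(w.message):
            warnings.warn(w.message)
```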
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7401/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7401",
"html_url": "https://github.com/huggingface/transformers/pull/7401",
"diff_url": "https://github.com/huggingface/transformers/pull/7401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7401.patch",
"merged_at": 1601295611000
} |
https://api.github.com/repos/huggingface/transformers/issues/7400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7400/comments | https://api.github.com/repos/huggingface/transformers/issues/7400/events | https://github.com/huggingface/transformers/pull/7400 | 709,204,292 | MDExOlB1bGxSZXF1ZXN0NDkzMzA3Mjc4 | 7,400 | remove codecov PR comments | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The page is nice but the data seems wrong -- \r\nhttps://codecov.io/gh/huggingface/transformers/src/master/src/transformers/modeling_layoutlm.py\r\nsays that `__init__` is not covered, but I checked and it is.. in `test_modeling_layoutlm.py`.\r\n\r\nI can also just disable PR comments if that works better for you.\r\n\r\n",
"> Is there another tool we could leverage to give us more reliable information on our code coverage?\r\n\r\nThe tool is not the problem, the problem lies in our test suite being not idempotent. codecov only compares the old coverage to new coverage. It can't give correct coverage if the data it works with is invalid. Garbage in garbage out.\r\n\r\nIf you want an approximate coverage, you can just add `cov: pytest --cov` in Makefile. It has a bunch of formats if you want the report in a particular format. It should be within 98% of correctness based on the current state of the test suite. ",
"I understand the issue. Could we simply disable the PR comments for now @sshleifer, as that's the only pain point?",
"Done. Will merge once checks pass!"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null |
#### Problem
+ @stas00 has tried very hard to get codecov working to no avail in https://github.com/huggingface/transformers/issues/6317
+ Files that are not affected by a PR show changes in coverage.
+ Code coverage information is rarely, if ever, useful
+ lots of distracting spam emails
+ lots of vertical space that could otherwise be used for reading discussion history.
#### Proposed solution:
The idea of codecov -- to warn people before they introduce untested code -- is good, but the current implementation is worse than nothing, and after significant effort (mostly from @stas00) I think it is time to give up. If we see another tool we like, or manage to reconfigure this one to work well, that's great, but I think that should happen without broken codecov on master.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7400/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7400/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7400",
"html_url": "https://github.com/huggingface/transformers/pull/7400",
"diff_url": "https://github.com/huggingface/transformers/pull/7400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7400.patch",
"merged_at": 1601407004000
} |
https://api.github.com/repos/huggingface/transformers/issues/7399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7399/comments | https://api.github.com/repos/huggingface/transformers/issues/7399/events | https://github.com/huggingface/transformers/pull/7399 | 709,148,607 | MDExOlB1bGxSZXF1ZXN0NDkzMjU3NTY4 | 7,399 | [Rag] fix rag retriever save_pretrained method | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | MEMBER | null | This PR fixes a typo in `RagRetriever`. `generator_tokenizer` was renamed to just `generator` in `RagTokenizer` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7399/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7399",
"html_url": "https://github.com/huggingface/transformers/pull/7399",
"diff_url": "https://github.com/huggingface/transformers/pull/7399.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7399.patch",
"merged_at": 1601056033000
} |
https://api.github.com/repos/huggingface/transformers/issues/7398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7398/comments | https://api.github.com/repos/huggingface/transformers/issues/7398/events | https://github.com/huggingface/transformers/issues/7398 | 709,128,731 | MDU6SXNzdWU3MDkxMjg3MzE= | 7,398 | Uploading/Sharing large models to HuggingFace | {
"login": "MXueguang",
"id": 34487581,
"node_id": "MDQ6VXNlcjM0NDg3NTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/34487581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MXueguang",
"html_url": "https://github.com/MXueguang",
"followers_url": "https://api.github.com/users/MXueguang/followers",
"following_url": "https://api.github.com/users/MXueguang/following{/other_user}",
"gists_url": "https://api.github.com/users/MXueguang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MXueguang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MXueguang/subscriptions",
"organizations_url": "https://api.github.com/users/MXueguang/orgs",
"repos_url": "https://api.github.com/users/MXueguang/repos",
"events_url": "https://api.github.com/users/MXueguang/events{/privacy}",
"received_events_url": "https://api.github.com/users/MXueguang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There is no limit to the file sizes on the model hub, however, for uploads that large and if your connection is even slightly unstable, it can indeed fail.\r\n\r\nIf you have another host (S3 bucket or whatever) you can upload the file to, I can handle `cp`ing it to your namespace on huggingface.co",
"Actually It will abort at the very beginning of uploading process for the large file every time. All my other smaller models could be uploaded smoothly. so I feel it might not be my network issue.\r\nMy `pytorch_model.bin` is about 11G. I tried to use `truncate` to truncate the file size and noticed that I will keep aborting until I `truncate` the file to 5G",
"Btw, we have a public google cloud storage host. Does it work for you if i am still not able to upload the model?",
"I can indeed reproduce. For now, can you upload to a GCS or S3 bucket, post the url here, and I'll cp the file? \r\n\r\nWill take a note to investigate/fix this in the future.",
"```\r\ngs://ron-random/castorini/monot5-3b-med-msmarco/\r\ngs://ron-random/castorini/monot5-3b-msmarco/\r\n```\r\nCould you help us cp these two models to our organization `castorini`\r\n\r\nThank you very much for your help!",
"Here you go: https://huggingface.co/castorini",
"Will close this for now but we are tracking the \"large file upload\" issue internally"
] | 1,601 | 1,601 | 1,601 | NONE | null | Hi,
I am trying to upload a `t5-3b`-based model to HuggingFace. The folder to upload is about 11G.
When I upload, it gives `'Connection aborted.', BrokenPipeError(32, 'Broken pipe')`.
Is this because the model is too large and there is a size limit? How can I deal with that?
Thank you for your help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7398/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7397/comments | https://api.github.com/repos/huggingface/transformers/issues/7397/events | https://github.com/huggingface/transformers/issues/7397 | 709,106,701 | MDU6SXNzdWU3MDkxMDY3MDE= | 7,397 | Add DistilBERTGeneration comparable to BertGeneration | {
"login": "jsilter",
"id": 603941,
"node_id": "MDQ6VXNlcjYwMzk0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/603941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jsilter",
"html_url": "https://github.com/jsilter",
"followers_url": "https://api.github.com/users/jsilter/followers",
"following_url": "https://api.github.com/users/jsilter/following{/other_user}",
"gists_url": "https://api.github.com/users/jsilter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jsilter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jsilter/subscriptions",
"organizations_url": "https://api.github.com/users/jsilter/orgs",
"repos_url": "https://api.github.com/users/jsilter/repos",
"events_url": "https://api.github.com/users/jsilter/events{/privacy}",
"received_events_url": "https://api.github.com/users/jsilter/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @jsilter - yes we could definitely add a `DistilForCausalLM` model. I think instead of doing something similar to `BertGeneration` it would be easier to just add a `DistilBertForCausalLM` to `modeling_distilbert.py` similar to `BertLMHeadModel` or `RobertaForCausalLM`. This could actually be an interesting `Good Second Issue`. If someone is interested in opening a PR - I'd be more than happy to provide some guidance :-)",
"Hi @patrickvonplaten, I would love to work on this if it is still possible?",
"Hey @KMFODA - yes absolutely :-) Do you want to open a PR? I think we can very analogues to `BertLMHeadModel` add a `DistilBertForCausalLM` model in `modeling_distilbert.py`.",
"Great! Will open up a PR and start adding a `DistilBertForCausalLM` model into `modeling_distilbert.py` and get back to you if I have any issues :)",
"Hi @patrickvonplaten, I've built the `DistilBertForCausalLM` class into `modelling_distilbert.py` and can run it on the example used in both the `BertLMHeadModel` and the `RobertaForCausalLM` and the outputs look fine. Other than this example, are there any other tests I can run to check it's working as expected?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,610 | null | NONE | null | # 🚀 Feature request
I noticed the new `BertGeneration` class, which uses BERT-style models as both encoder and decoder, as well as the more general `EncoderDecoder` class. This is all great stuff! It would also be great to be able to use distilled models. I believe this is possible for the encoder, but for the decoder a language head must be added.
Since DistilBert is implemented as its own model, and not as a BertModel, I don't think it's possible (or at least it's not easy) for the end user to do this - at least not when loading pretrained models, since any pretrained model needs to be a type approved by `AutoModelForCausalLM`.
## Motivation
Same motivation as using distilled models in general. Same results at higher speed, this time applied to an `EncoderDecoder` model.
## Your contribution
Happy to be an alpha tester for this feature
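To make the request concrete, this is the kind of call the feature would enable - it fails today precisely because DistilBERT has no causal-LM head registered, so treat it as a sketch of the desired usage, not working code:

```python
# Sketch of the desired usage; this currently fails because DistilBERT
# is not an AutoModelForCausalLM-approved decoder type.
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "distilbert-base-uncased",  # encoder: already works
    "distilbert-base-uncased",  # decoder: needs a DistilBertForCausalLM head
)
```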
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7397/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7396/comments | https://api.github.com/repos/huggingface/transformers/issues/7396/events | https://github.com/huggingface/transformers/issues/7396 | 709,104,275 | MDU6SXNzdWU3MDkxMDQyNzU= | 7,396 | (GPT2) Running out of GPU memory(24G) on WSL2 but not on native linux. | {
"login": "Sheraf1",
"id": 9618331,
"node_id": "MDQ6VXNlcjk2MTgzMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9618331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sheraf1",
"html_url": "https://github.com/Sheraf1",
"followers_url": "https://api.github.com/users/Sheraf1/followers",
"following_url": "https://api.github.com/users/Sheraf1/following{/other_user}",
"gists_url": "https://api.github.com/users/Sheraf1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sheraf1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sheraf1/subscriptions",
"organizations_url": "https://api.github.com/users/Sheraf1/orgs",
"repos_url": "https://api.github.com/users/Sheraf1/repos",
"events_url": "https://api.github.com/users/Sheraf1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sheraf1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I can train the bert-base-multilingual-cased model and its taking almost all my memory (21109MiB / 24576MiB) on WSL2 meanwhile only taking about 8G on native linux..\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version:
- Platform: WSL2 Debian
- Python version: 3.7
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using the official example scripts: I'm running run_language_modeling.py, trying to fine-tune GPT-2.
----
On WSL2 I run out of memory:
```RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 24.00 GiB total capacity; 22.01 GiB already allocated; 342.71 MiB free; 65.09 MiB cached)```
but if I boot a live Ubuntu and run the exact same script, it works fine.
I'm using all default settings, just as in the example doc.
I'm not sure what it's due to or how to fix it.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7396/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7395/comments | https://api.github.com/repos/huggingface/transformers/issues/7395/events | https://github.com/huggingface/transformers/pull/7395 | 709,040,581 | MDExOlB1bGxSZXF1ZXN0NDkzMTYwNTUy | 7,395 | [RAG] Remove dependency on `examples/seq2seq` from rag | {
"login": "ola13",
"id": 1528523,
"node_id": "MDQ6VXNlcjE1Mjg1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1528523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ola13",
"html_url": "https://github.com/ola13",
"followers_url": "https://api.github.com/users/ola13/followers",
"following_url": "https://api.github.com/users/ola13/following{/other_user}",
"gists_url": "https://api.github.com/users/ola13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ola13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ola13/subscriptions",
"organizations_url": "https://api.github.com/users/ola13/orgs",
"repos_url": "https://api.github.com/users/ola13/repos",
"events_url": "https://api.github.com/users/ola13/events{/privacy}",
"received_events_url": "https://api.github.com/users/ola13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=h1) Report\n> Merging [#7395](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf1c88e0921243e760d306e63a5938e1bac880f3?el=desc) will **increase** coverage by `0.96%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7395 +/- ##\n==========================================\n+ Coverage 76.65% 77.62% +0.96% \n==========================================\n Files 181 181 \n Lines 35728 35728 \n==========================================\n+ Hits 27387 27733 +346 \n+ Misses 8341 7995 -346 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.13% <0.00%> (-15.42%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.36% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.12% <0.00%> (-0.17%)` | :arrow_down: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=footer). Last update [cf1c88e...4601efb](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | We were importing some functionality from `examples/seq2seq`, however, it seems more HugginFace-like and less error-prone to just copy-paste.
Tested by launching evaluation and training runs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7395/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7395/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7395",
"html_url": "https://github.com/huggingface/transformers/pull/7395",
"diff_url": "https://github.com/huggingface/transformers/pull/7395.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7395.patch",
"merged_at": 1601050849000
} |
https://api.github.com/repos/huggingface/transformers/issues/7394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7394/comments | https://api.github.com/repos/huggingface/transformers/issues/7394/events | https://github.com/huggingface/transformers/pull/7394 | 709,038,970 | MDExOlB1bGxSZXF1ZXN0NDkzMTU5MjA5 | 7,394 | Speedup check_copies script | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Whoah! That's blazing fast! Thanks, @sgugger!\r\n\r\nI think that's why `flake8` is slow - it runs black in some slow mode (`black` itself is very fast)\r\n\r\nYou can always add a flag that activates that disabled function, so it's there if needed.",
"This is no longer needed: `--line-length 119 --target-version py35` at https://github.com/huggingface/transformers/blob/90d1545f25b02a05b1581ae7a617db609fece0a0/utils/check_copies.py#L85\r\nit now uses the config file - ensures we only have one place to do this setting.\r\n\r\nAlso, I haven't studied your code, but if it's applicable - skip checking files that haven't changes since last check - should give a huge speed increase, since typically only a few files are touched during a development of a single PR. If if is applicable and I can be of help let me know.\r\n\r\nI wish black/flake8/isort did that too. It makes no sense to re-run the check on files that haven't changed, which is like 99% of files most of the time.",
"No need for a check-since-file-modified approach, use this instead:\r\n```\r\ngit diff --name-only $(git merge-base --fork-point master)\r\n```\r\nas the source of what files to check.\r\n\r\nIt will give you all the files that were modified since the branch was made - yay!\r\n\r\nBut you only want specific sub-folders, so:\r\n\r\n```\r\ngit diff --name-only $(git merge-base --fork-point master) | egrep '^(examples|templates|tests|src|utils)' | tr '\\n' ' '\r\n```\r\n\r\nNow you can unleash whatever checks and it'd be all blazing fast.\r\n\r\nI will post shortly a PR to make flake8 and other checkers rocket-fast! https://github.com/huggingface/transformers/pull/7403\r\n\r\nI will make a function in Makefile which you can use to feed to the check scripts just the modified files. **edit**: See https://github.com/huggingface/transformers/pull/7403 you now have a variable with all the modified files."
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | Checking the copies by using black was slowing down the script quite a lot, so removing this check makes the script way faster. Removing `blackify` use could make the script less robust though, so leaving the function for now even if we don't use it anymore. If a situation arises where we see the script fail, I can code a (more complex) way of using black that would be fast.
With the two lines removed, the script takes 0.129s on my setup (instead of 18s).
cc @stas00 for information. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7394/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7394",
"html_url": "https://github.com/huggingface/transformers/pull/7394",
"diff_url": "https://github.com/huggingface/transformers/pull/7394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7394.patch",
"merged_at": 1601048842000
} |
https://api.github.com/repos/huggingface/transformers/issues/7393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7393/comments | https://api.github.com/repos/huggingface/transformers/issues/7393/events | https://github.com/huggingface/transformers/issues/7393 | 709,029,525 | MDU6SXNzdWU3MDkwMjk1MjU= | 7,393 | [trainer] Training from scratch | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sshleifer Definitely possible, except we'll need to use `AutoModelForSeq2SeqLM` 😉\r\nWe can also pass a different `config` if we don't want to use `pretrained config` using `config_name` argument.\r\n\r\nHappy to open a PR if it's needed :). Let me know",
"@sgugger is this possible with existing `Trainer? It seems like Seq2Seq is the wrong level for this feature to be implemented.",
"Models are initialised in example scripts rather than `Trainer`. Currently we need to save a from scratch model and then pass that. IMO it makes sense to add the `from_scratch` argument to `TrainingArguments` but each examples scripts will need to handle this itself\r\n",
"Oh that's a reasonable workaround. \r\n\r\n```\r\ndef save_randomly_initialized_version(config_name, save_dir, **config_kwargs):\r\n cfg = AutoConfig.from_pretrained(config_name, **config_kwargs)\r\n model = AutoModelForSeq2SeqLM.from_config(cfg)\r\n model.save_pretrained(save_dir)\r\n AutoTokenizer.from_pretrained(config_name).save_pretrained(save_dir)\r\n```\r\nI'll put this in the make_student PR.\r\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | @patil-suraj is this possible in the new `Seq2SeqTrainer`?
Possible solution sketch:
Where we call:
```
AutoSeq2SeqModelWithLMHead.from_pretrained(model_name)
```
Switch to
```
if args.from_scratch: model = AutoSeq2SeqModelWithLMHead(config)
else: model = AutoSeq2SeqModelWithLMHead.from_pretrained(model_name)
```
What do you think?
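For reference, the same sketch with the actual Auto class names (taking the correction from the comments; `args.from_scratch` remains hypothetical):

```python
# Same idea with real class names; `args.from_scratch` is still hypothetical.
from transformers import AutoConfig, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained(model_name)
if args.from_scratch:
    model = AutoModelForSeq2SeqLM.from_config(config)  # randomly initialized
else:
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```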
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7393/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7392/comments | https://api.github.com/repos/huggingface/transformers/issues/7392/events | https://github.com/huggingface/transformers/pull/7392 | 709,026,977 | MDExOlB1bGxSZXF1ZXN0NDkzMTQ5MjA1 | 7,392 | Pull request template | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | MEMBER | null | The goal of this PR is to complete the existing pull request template with some additional information, some useful comments for the contributor, as well as the helpful tagging suggestions that already exist in the issue template.
co-authored-by: sgugger <[email protected]>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7392/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7392",
"html_url": "https://github.com/huggingface/transformers/pull/7392",
"diff_url": "https://github.com/huggingface/transformers/pull/7392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7392.patch",
"merged_at": 1601301109000
} |
https://api.github.com/repos/huggingface/transformers/issues/7391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7391/comments | https://api.github.com/repos/huggingface/transformers/issues/7391/events | https://github.com/huggingface/transformers/pull/7391 | 708,981,146 | MDExOlB1bGxSZXF1ZXN0NDkzMTEwOTE3 | 7,391 | Remove unhelpful bart warning | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=h1) Report\n> Merging [#7391](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf1c88e0921243e760d306e63a5938e1bac880f3?el=desc) will **decrease** coverage by `0.25%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7391 +/- ##\n==========================================\n- Coverage 76.65% 76.40% -0.26% \n==========================================\n Files 181 181 \n Lines 35728 35726 -2 \n==========================================\n- Hits 27387 27296 -91 \n- Misses 8341 8430 +89 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.10% <ø> (-0.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=footer). Last update [cf1c88e...6779ded](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | This gets hit at the first step of generate. My bad.
The CI failures are spurious. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7391/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7391",
"html_url": "https://github.com/huggingface/transformers/pull/7391",
"diff_url": "https://github.com/huggingface/transformers/pull/7391.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7391.patch",
"merged_at": 1601046068000
} |
https://api.github.com/repos/huggingface/transformers/issues/7390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7390/comments | https://api.github.com/repos/huggingface/transformers/issues/7390/events | https://github.com/huggingface/transformers/pull/7390 | 708,955,432 | MDExOlB1bGxSZXF1ZXN0NDkzMDg4OTky | 7,390 | Fix BartModel output documentation | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=h1) Report\n> Merging [#7390](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/571c7a11c17bd00ba3e79f4d853cc51428a14e45?el=desc) will **decrease** coverage by `0.76%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7390 +/- ##\n==========================================\n- Coverage 77.64% 76.87% -0.77% \n==========================================\n Files 181 181 \n Lines 35722 35722 \n==========================================\n- Hits 27736 27461 -275 \n- Misses 7986 8261 +275 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.12% <ø> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.01% <0.00%> (+0.64%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.55% <0.00%> (+15.41%)` | :arrow_up: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `75.00% <0.00%> (+20.83%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=footer). Last update [571c7a1...5603f56](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | As mentioned in #7380, the output documented for `BartModel` was wrong. This PR should fix this.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7390/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7390",
"html_url": "https://github.com/huggingface/transformers/pull/7390",
"diff_url": "https://github.com/huggingface/transformers/pull/7390.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7390.patch",
"merged_at": 1601048893000
} |
https://api.github.com/repos/huggingface/transformers/issues/7389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7389/comments | https://api.github.com/repos/huggingface/transformers/issues/7389/events | https://github.com/huggingface/transformers/issues/7389 | 708,925,892 | MDU6SXNzdWU3MDg5MjU4OTI= | 7,389 | Custom preprocessing of text | {
"login": "datistiquo",
"id": 47474379,
"node_id": "MDQ6VXNlcjQ3NDc0Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/47474379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datistiquo",
"html_url": "https://github.com/datistiquo",
"followers_url": "https://api.github.com/users/datistiquo/followers",
"following_url": "https://api.github.com/users/datistiquo/following{/other_user}",
"gists_url": "https://api.github.com/users/datistiquo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datistiquo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datistiquo/subscriptions",
"organizations_url": "https://api.github.com/users/datistiquo/orgs",
"repos_url": "https://api.github.com/users/datistiquo/repos",
"events_url": "https://api.github.com/users/datistiquo/events{/privacy}",
"received_events_url": "https://api.github.com/users/datistiquo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"It shouldn't be necessary as it performs byte pair encoding (BPE) when a word isn't in it's vocabulary. For example \"Gallbladder palpaple\". Palpaple isn't in my vocabulary so it breaks the word into many partial words that are in the vocabulary as: ['p', '##al', '##pa', '##ple']. This would match variations that would otherwise need to be stemmed or converted to it's lemma.\r\n\r\nThis will however be an issue if you are using the model to perform cosine similarly. The results are terrible when you have many words out of vocabulary. "
] | 1,601 | 1,666 | 1,607 | NONE | null | I feel like this is a silly question, but it just occurred to me while using BERT: when working with fastText, for example, I had to do preprocessing like word stemming/lemmatization and stopword removal. What is the advice for using BERT models?
Does it cause problems if I do stemming or lemmatization before feeding text to the BERT tokenizer?
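One quick way to check this (a minimal sketch; the exact subword splits depend on the checkpoint's vocabulary):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# WordPiece splits unknown or inflected forms into '##' subword pieces
# on its own, so stemming/lemmatizing beforehand is usually unnecessary
# (and makes the input look different from what the model was pretrained on):
print(tok.tokenize("Gallbladder palpaple"))
```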
So many questions... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7389/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7389/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7388/comments | https://api.github.com/repos/huggingface/transformers/issues/7388/events | https://github.com/huggingface/transformers/pull/7388 | 708,925,675 | MDExOlB1bGxSZXF1ZXN0NDkzMDYzMzc0 | 7,388 | Update LayoutLM doc | {
"login": "av-maslov",
"id": 71869629,
"node_id": "MDQ6VXNlcjcxODY5NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/71869629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/av-maslov",
"html_url": "https://github.com/av-maslov",
"followers_url": "https://api.github.com/users/av-maslov/followers",
"following_url": "https://api.github.com/users/av-maslov/following{/other_user}",
"gists_url": "https://api.github.com/users/av-maslov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/av-maslov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/av-maslov/subscriptions",
"organizations_url": "https://api.github.com/users/av-maslov/orgs",
"repos_url": "https://api.github.com/users/av-maslov/repos",
"events_url": "https://api.github.com/users/av-maslov/events{/privacy}",
"received_events_url": "https://api.github.com/users/av-maslov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=h1) Report\n> Merging [#7388](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e68d075a4100906509170498480823e7e61874a?el=desc) will **decrease** coverage by `2.58%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7388 +/- ##\n==========================================\n- Coverage 79.33% 76.75% -2.59% \n==========================================\n Files 181 181 \n Lines 35759 35759 \n==========================================\n- Hits 28371 27447 -924 \n- Misses 7388 8312 +924 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-39.79%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.13% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (+6.76%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.31% <0.00%> (+12.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.55% <0.00%> (+15.41%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=footer). Last update [9e68d07...9d4e5fc](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Minor update to model_doc/layoutlm.rs
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@sgugger @julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7388/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7388",
"html_url": "https://github.com/huggingface/transformers/pull/7388",
"diff_url": "https://github.com/huggingface/transformers/pull/7388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7388.patch",
"merged_at": 1601557903000
} |
https://api.github.com/repos/huggingface/transformers/issues/7387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7387/comments | https://api.github.com/repos/huggingface/transformers/issues/7387/events | https://github.com/huggingface/transformers/pull/7387 | 708,837,432 | MDExOlB1bGxSZXF1ZXN0NDkyOTg4OTA2 | 7,387 | Fix tokenization in SQuAD for RoBERTa, Longformer, BART | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=h1) Report\n> Merging [#7387](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2dd652d757132d97e43173fb048849685ecccb68?el=desc) will **increase** coverage by `2.39%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7387 +/- ##\n==========================================\n+ Coverage 76.92% 79.32% +2.39% \n==========================================\n Files 181 181 \n Lines 35721 35726 +5 \n==========================================\n+ Hits 27480 28339 +859 \n+ Misses 8241 7387 -854 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.61% <50.00%> (+0.47%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `78.81% <0.00%> (-12.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `83.11% <0.00%> (-10.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.33% <0.00%> (-7.31%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-6.27%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `90.47% <0.00%> (-1.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| ... 
and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=footer). Last update [2dd652d...a3b11d3](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Also pinging @mfuntowicz here",
"@mfuntowicz @sgugger Is there anything else you want to tackle before merging? "
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Originating from this discussion: https://github.com/huggingface/transformers/pull/4615#issuecomment-697725357
**Issue:**
Tokenization of the context in `squad_convert_example_to_features()` for RoBERTa-like tokenizers does not preserve whitespace, because we call the tokenizer on previously split, individual words.
**Example:**
Q = Who was Jim Henson?
Context = Jim Henson was a nice puppet
Expected Tokens: ['< s>', 'who', 'Ġwas', 'Ġj', 'im', 'Ġhen', 'son', '?', '</s>', '</s>', 'Ġj', 'im', 'Ġhen', 'son', 'Ġwas', 'Ġa', 'Ġnice', 'Ġpuppet', '</s>']
Actual Tokens: ['< s>', 'who', 'Ġwas', 'Ġj', 'im', 'Ġhen', 'son', '?', '</s>', '</s>', 'j', 'im', 'hen', 'son', 'was', 'a', 'nice', 'p', 'uppet', '</s>']
Decoded string: Who was Jim Henson?JimHensonwasanicepuppet
**Why is this a problem?**
- Inconsistency: The question gets tokenized incl. whitespace while the context doesn't. If we have the same word in question and context, we will encode them to different ids.
- Model performance: Eval metrics of `deepset/roberta-base-squad2` on SQuAD 2 dev are significantly lower than originally reported (F1: 69.6 vs. 81.7). After this fix, they are back to normal (F1: 81.91).
Evaluated via:
```
run_squad.py \
--model_type roberta \
--model_name_or_path deepset/roberta-base-squad2 \
--output_dir results/deepset-roberta-base-squad2 \
--data_dir . \
--predict_file dev-v2.0.json \
--do_eval \
--version_2_with_negative \
--per_gpu_eval_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--seed 42 \
--threads 12 \
```
**Fix:**
Enable `add_prefix_space` for RoBERTa-like tokenizers
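A minimal sketch of the effect (assuming a standard `roberta-base` checkpoint; the exact subword pieces may differ):
```python
from transformers import RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-base")

# Tokenizing a pre-split word in isolation loses the leading-space
# marker "Ġ", so it encodes differently than the same word would
# inside a full sentence:
print(tok.tokenize("Henson"))

# With add_prefix_space=True the word is treated as if preceded by a
# space, restoring the "Ġ"-prefixed pieces used inside the context:
print(tok.tokenize("Henson", add_prefix_space=True))
```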
**Limitations:**
- not the most elegant solution
- not sure if there are more tokenizers with similar behavior that we should add
**Related to:**
https://github.com/huggingface/transformers/issues/7249
@patrickvonplaten @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7387/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7387/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7387",
"html_url": "https://github.com/huggingface/transformers/pull/7387",
"diff_url": "https://github.com/huggingface/transformers/pull/7387.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7387.patch",
"merged_at": 1601894054000
} |
https://api.github.com/repos/huggingface/transformers/issues/7386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7386/comments | https://api.github.com/repos/huggingface/transformers/issues/7386/events | https://github.com/huggingface/transformers/pull/7386 | 708,788,678 | MDExOlB1bGxSZXF1ZXN0NDkyOTQ4NTY0 | 7,386 | [Rag] Fix wrong usage of `num_beams` and `bos_token_id` in Rag Sequence generation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=h1) Report\n> Merging [#7386](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8d3bb781ee2643ad1076f4cbcc6f417245671e94?el=desc) will **increase** coverage by `2.51%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7386 +/- ##\n==========================================\n+ Coverage 76.61% 79.12% +2.51% \n==========================================\n Files 181 181 \n Lines 35759 35760 +1 \n==========================================\n+ Hits 27395 28295 +900 \n+ Misses 8364 7465 -899 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `65.26% <0.00%> (-33.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/configuration\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `97.77% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.62% <0.00%> (-1.41%)` | :arrow_down: |\n| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=footer). Last update [8d3bb78...10026f3](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | MEMBER | null | Small changes => big impact. Hopefully e2e results are better now @ola13 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7386/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7386/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7386",
"html_url": "https://github.com/huggingface/transformers/pull/7386",
"diff_url": "https://github.com/huggingface/transformers/pull/7386.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7386.patch",
"merged_at": 1601037350000
} |
https://api.github.com/repos/huggingface/transformers/issues/7385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7385/comments | https://api.github.com/repos/huggingface/transformers/issues/7385/events | https://github.com/huggingface/transformers/pull/7385 | 708,785,764 | MDExOlB1bGxSZXF1ZXN0NDkyOTQ2MTQ2 | 7,385 | [s2s, examples] minor doc changes | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=h1) Report\n> Merging [#7385](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cdd9da5bf28c53c214e22d082dd62032f9b00fc?el=desc) will **decrease** coverage by `0.61%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7385 +/- ##\n==========================================\n- Coverage 77.57% 76.96% -0.62% \n==========================================\n Files 181 181 \n Lines 35721 35721 \n==========================================\n- Hits 27712 27492 -220 \n- Misses 8009 8229 +220 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.12% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.64% <0.00%> (+0.37%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.50%)` | :arrow_up: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=footer). Last update [7cdd9da...7b4f617](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yay! Thanks, cc @sshleifer "
] | 1,601 | 1,601 | 1,601 | MEMBER | null | Updates `The Big Table of Tasks`, and note about `fp16` with torch 1.6 for `Seq2SeqTrainer`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7385/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7385/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7385",
"html_url": "https://github.com/huggingface/transformers/pull/7385",
"diff_url": "https://github.com/huggingface/transformers/pull/7385.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7385.patch",
"merged_at": 1601035237000
} |
https://api.github.com/repos/huggingface/transformers/issues/7384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7384/comments | https://api.github.com/repos/huggingface/transformers/issues/7384/events | https://github.com/huggingface/transformers/pull/7384 | 708,755,071 | MDExOlB1bGxSZXF1ZXN0NDkyOTIxODg4 | 7,384 | Flos fix | {
"login": "marrrcin",
"id": 6958772,
"node_id": "MDQ6VXNlcjY5NTg3NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6958772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marrrcin",
"html_url": "https://github.com/marrrcin",
"followers_url": "https://api.github.com/users/marrrcin/followers",
"following_url": "https://api.github.com/users/marrrcin/following{/other_user}",
"gists_url": "https://api.github.com/users/marrrcin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marrrcin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrrcin/subscriptions",
"organizations_url": "https://api.github.com/users/marrrcin/orgs",
"repos_url": "https://api.github.com/users/marrrcin/repos",
"events_url": "https://api.github.com/users/marrrcin/events{/privacy}",
"received_events_url": "https://api.github.com/users/marrrcin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please merge at will, as this fix is blocking us (https://github.com/huggingface/transformers/issues/7146#issuecomment-698852274). "
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #7146
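A minimal sketch of the unwrapping this PR performs (hypothetical helper name, not the exact patch):
```python
import torch.nn as nn

def unwrap_model(model: nn.Module) -> nn.Module:
    # DataParallel / DistributedDataParallel keep the real model in `.module`
    return model.module if hasattr(model, "module") else model
```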
This basically unwraps the model that is used during training and can be either plain `Module` or `DataParallel`/`DistributedDataParallel`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7384/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7384",
"html_url": "https://github.com/huggingface/transformers/pull/7384",
"diff_url": "https://github.com/huggingface/transformers/pull/7384.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7384.patch",
"merged_at": 1601280567000
} |
https://api.github.com/repos/huggingface/transformers/issues/7383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7383/comments | https://api.github.com/repos/huggingface/transformers/issues/7383/events | https://github.com/huggingface/transformers/issues/7383 | 708,752,159 | MDU6SXNzdWU3MDg3NTIxNTk= | 7,383 | Missing keys when loading weights in TF are not useful | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Fixed in #7422 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | MEMBER | null | ## This concerns all TF models
If one loads weights of a tensorflow model these lines are run:
https://github.com/huggingface/transformers/blob/9e68d075a4100906509170498480823e7e61874a/src/transformers/modeling_tf_utils.py#L627
to check which layers are in the model weights file and which layer names of the model are actually loaded.
The problem is that these layer names consist only of the top-level ("highest") layer names of a model. *E.g.*, for *TFBertForMaskedLM*, these layer names are just:
"bert" and "mlm",
instead of a name for each weight, as it should be.
See:
https://github.com/huggingface/transformers/blob/3c6bf8998fb6ca5aca063fed2543b7176883b004/src/transformers/modeling_tf_bert.py#L865
So the missing-keys report for TensorFlow will only capture missing weights at the highest level.
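A quick way to see the granularity difference (minimal sketch; the printed names may vary across versions):
```python
from transformers import TFBertForMaskedLM

model = TFBertForMaskedLM.from_pretrained("bert-base-cased")

# Top-level Keras layer names -- this is all the current loading code compares:
print([layer.name for layer in model.layers])

# Per-weight names, which is the granularity a useful
# missing/unexpected-keys report would need:
print([w.name for w in model.weights][:3])
```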
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7383/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7382/comments | https://api.github.com/repos/huggingface/transformers/issues/7382/events | https://github.com/huggingface/transformers/pull/7382 | 708,714,086 | MDExOlB1bGxSZXF1ZXN0NDkyODkwOTY1 | 7,382 | [RAG] Add missing doc and attention_mask to rag | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | MEMBER | null | Adds docs to the newly added `attention_mask` (hope Sylvain is not gonna be too mad that I forgot!) and corrects evaluation for RAG fine-tuning. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7382/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7382/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7382",
"html_url": "https://github.com/huggingface/transformers/pull/7382",
"diff_url": "https://github.com/huggingface/transformers/pull/7382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7382.patch",
"merged_at": 1601025836000
} |
https://api.github.com/repos/huggingface/transformers/issues/7381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7381/comments | https://api.github.com/repos/huggingface/transformers/issues/7381/events | https://github.com/huggingface/transformers/pull/7381 | 708,571,113 | MDExOlB1bGxSZXF1ZXN0NDkyNzc5NjYy | 7,381 | modeling_bart: 3 small cleanups that dont change outputs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=h1) Report\n> Merging [#7381](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ccb6f5c6da9e703766e8053581fddfc6dcc71a9?el=desc) will **decrease** coverage by `1.41%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7381 +/- ##\n==========================================\n- Coverage 78.20% 76.78% -1.42% \n==========================================\n Files 181 181 \n Lines 35751 35753 +2 \n==========================================\n- Hits 27959 27454 -505 \n- Misses 7792 8299 +507 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.12% <100.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-39.79%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.36% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=footer). Last update [0ccb6f5...3c0b8e3](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,601 | 1,601 | CONTRIBUTOR | null | + Fixes #6259
+ allows a better diff if the mbart integration test breaks
+ raises a Warning in the classic "use cache when call forward" mixup (test_benchmark triggers this warning).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7381/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7381",
"html_url": "https://github.com/huggingface/transformers/pull/7381",
"diff_url": "https://github.com/huggingface/transformers/pull/7381.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7381.patch",
"merged_at": 1601022254000
} |
https://api.github.com/repos/huggingface/transformers/issues/7380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7380/comments | https://api.github.com/repos/huggingface/transformers/issues/7380/events | https://github.com/huggingface/transformers/issues/7380 | 708,519,608 | MDU6SXNzdWU3MDg1MTk2MDg= | 7,380 | Incorrect output fields names in docs | {
"login": "visheratin",
"id": 3251552,
"node_id": "MDQ6VXNlcjMyNTE1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3251552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/visheratin",
"html_url": "https://github.com/visheratin",
"followers_url": "https://api.github.com/users/visheratin/followers",
"following_url": "https://api.github.com/users/visheratin/following{/other_user}",
"gists_url": "https://api.github.com/users/visheratin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/visheratin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/visheratin/subscriptions",
"organizations_url": "https://api.github.com/users/visheratin/orgs",
"repos_url": "https://api.github.com/users/visheratin/repos",
"events_url": "https://api.github.com/users/visheratin/events{/privacy}",
"received_events_url": "https://api.github.com/users/visheratin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"Actually, the root of the problem might be related to the fact that the documentation states that the forward pass returns `BaseModelOutputWithPast` but in fact in returns `Seq2SeqModelOutput` ([source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L947-L955)).",
"Thanks for flagging! This should be fixed once the PR above is merged.",
"Solved by #7390"
] | 1,600 | 1,601 | 1,601 | NONE | null | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-5.4.0-7642-generic-x86_64-with-glibc2.29
- Python version: 3.8.2
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@sgugger
## Information
The model I am using (Bert, XLNet ...): Bart
The problem arises when using the official example scripts.
```Python
from transformers import BartModel, BartTokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartModel.from_pretrained('facebook/bart-base', return_dict=True,
output_hidden_states=True, output_attentions=True)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
print(outputs.hidden_states)
```
This script results in the error `'Seq2SeqModelOutput' object has no attribute 'hidden_states'`.
## To reproduce
Steps to reproduce the behavior:
1. Run the script above.
## Expected behavior
My expectation was to get a set of hidden states for the model. But in fact, the model returns two sets of hidden states - one for the decoder and another for the encoder. This can be observed by looking at the keys of the `outputs`:
```Python
>>> print(outputs.keys())
odict_keys(['last_hidden_state', 'decoder_hidden_states', 'encoder_last_hidden_state', 'encoder_hidden_states'])
```
The same holds for the attentions if I specify `output_attentions=True`:
```Python
>>> print(outputs.keys())
odict_keys(['last_hidden_state', 'decoder_hidden_states', 'decoder_attentions', 'encoder_last_hidden_state', 'encoder_hidden_states', 'encoder_attentions'])
```
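For reference, the per-stack fields can be accessed like this (continuing the script above):
```python
# Seq2seq models split the states per stack, so use the
# decoder/encoder-specific fields instead of `hidden_states`:
decoder_states = outputs.decoder_hidden_states  # tuple: embeddings + each decoder layer
encoder_states = outputs.encoder_hidden_states  # tuple: embeddings + each encoder layer
print(len(decoder_states), len(encoder_states))
```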
My conclusion is that the documentation gives an incorrect description of the output fields. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7379/comments | https://api.github.com/repos/huggingface/transformers/issues/7379/events | https://github.com/huggingface/transformers/issues/7379 | 708,508,033 | MDU6SXNzdWU3MDg1MDgwMzM= | 7,379 | Movement Pruning for GPT2 | {
"login": "snaik2016",
"id": 18183245,
"node_id": "MDQ6VXNlcjE4MTgzMjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/18183245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snaik2016",
"html_url": "https://github.com/snaik2016",
"followers_url": "https://api.github.com/users/snaik2016/followers",
"following_url": "https://api.github.com/users/snaik2016/following{/other_user}",
"gists_url": "https://api.github.com/users/snaik2016/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snaik2016/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snaik2016/subscriptions",
"organizations_url": "https://api.github.com/users/snaik2016/orgs",
"repos_url": "https://api.github.com/users/snaik2016/repos",
"events_url": "https://api.github.com/users/snaik2016/events{/privacy}",
"received_events_url": "https://api.github.com/users/snaik2016/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,607 | 1,607 | NONE | null | # ❓ Questions & Help
Is it possible to make movement pruning work for the GPT-2 model?
In principle it should work as-is. Has anyone tried it, and can we have it in the examples?
Thanks
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7379/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7378/comments | https://api.github.com/repos/huggingface/transformers/issues/7378/events | https://github.com/huggingface/transformers/issues/7378 | 708,492,726 | MDU6SXNzdWU3MDg0OTI3MjY= | 7,378 | how to customize the position encoding | {
"login": "FTD007",
"id": 14077015,
"node_id": "MDQ6VXNlcjE0MDc3MDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/14077015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FTD007",
"html_url": "https://github.com/FTD007",
"followers_url": "https://api.github.com/users/FTD007/followers",
"following_url": "https://api.github.com/users/FTD007/following{/other_user}",
"gists_url": "https://api.github.com/users/FTD007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FTD007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FTD007/subscriptions",
"organizations_url": "https://api.github.com/users/FTD007/orgs",
"repos_url": "https://api.github.com/users/FTD007/repos",
"events_url": "https://api.github.com/users/FTD007/events{/privacy}",
"received_events_url": "https://api.github.com/users/FTD007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,607 | 1,607 | NONE | null | # ❓ Questions & Help
## Details
I want to add a non-sequential, custom position encoding to pre-train a model. Could anyone please point me to where I should look?
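For the PyTorch BERT-style models, the position embeddings are added in `BertEmbeddings` inside `modeling_bert.py`, so one place to start is swapping that submodule before pre-training. A minimal sketch follows; the sinusoidal module below is illustrative, not a library class, and it assumes an even hidden size.
```python
import math
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

class SinusoidalPositionalEmbedding(nn.Module):
    """Fixed sinusoidal encoding; indexed exactly like nn.Embedding."""

    def __init__(self, num_positions, dim):
        super().__init__()
        pe = torch.zeros(num_positions, dim)
        pos = torch.arange(num_positions, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, position_ids):
        return self.pe[position_ids]  # (batch, seq) -> (batch, seq, dim)

config = BertConfig()
model = BertModel(config)
model.embeddings.position_embeddings = SinusoidalPositionalEmbedding(
    config.max_position_embeddings, config.hidden_size
)
```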
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7378/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7377/comments | https://api.github.com/repos/huggingface/transformers/issues/7377/events | https://github.com/huggingface/transformers/pull/7377 | 708,477,273 | MDExOlB1bGxSZXF1ZXN0NDkyNzAxNzQ3 | 7,377 | Document RAG again | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=h1) Report\n> Merging [#7377](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eadd870b2f503047dd81b8dcd9d115dc1b4a9196?el=desc) will **increase** coverage by `0.75%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7377 +/- ##\n==========================================\n+ Coverage 77.99% 78.75% +0.75% \n==========================================\n Files 181 181 \n Lines 35759 35759 \n==========================================\n+ Hits 27891 28161 +270 \n+ Misses 7868 7598 -270 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `22.08% <0.00%> (-75.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-39.79%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (+0.55%)` | :arrow_up: |\n| ... 
and [14 more](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=footer). Last update [a8e7982...52fbf34](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,601 | 1,601 | COLLABORATOR | null | Do not merge before Monday
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7377/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7377/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7377",
"html_url": "https://github.com/huggingface/transformers/pull/7377",
"diff_url": "https://github.com/huggingface/transformers/pull/7377.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7377.patch",
"merged_at": 1601296307000
} |
https://api.github.com/repos/huggingface/transformers/issues/7376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7376/comments | https://api.github.com/repos/huggingface/transformers/issues/7376/events | https://github.com/huggingface/transformers/pull/7376 | 708,466,440 | MDExOlB1bGxSZXF1ZXN0NDkyNjkyNjk0 | 7,376 | Remove mentions of RAG from the docs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,600 | 1,600 | 1,600 | COLLABORATOR | null | You haven't seen anything. Those are not the droids you are looking for. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7376/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 3,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7376/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7376",
"html_url": "https://github.com/huggingface/transformers/pull/7376",
"diff_url": "https://github.com/huggingface/transformers/pull/7376.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7376.patch",
"merged_at": 1600981636000
} |
https://api.github.com/repos/huggingface/transformers/issues/7375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7375/comments | https://api.github.com/repos/huggingface/transformers/issues/7375/events | https://github.com/huggingface/transformers/issues/7375 | 708,462,148 | MDU6SXNzdWU3MDg0NjIxNDg= | 7,375 | CUDA out of memory error for Bert Model | {
"login": "Backpackerice",
"id": 7083541,
"node_id": "MDQ6VXNlcjcwODM1NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7083541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Backpackerice",
"html_url": "https://github.com/Backpackerice",
"followers_url": "https://api.github.com/users/Backpackerice/followers",
"following_url": "https://api.github.com/users/Backpackerice/following{/other_user}",
"gists_url": "https://api.github.com/users/Backpackerice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Backpackerice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Backpackerice/subscriptions",
"organizations_url": "https://api.github.com/users/Backpackerice/orgs",
"repos_url": "https://api.github.com/users/Backpackerice/repos",
"events_url": "https://api.github.com/users/Backpackerice/events{/privacy}",
"received_events_url": "https://api.github.com/users/Backpackerice/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"I agree, I had a stable training pipeline for training on TPU and suddenly it broke because it ran out of memory when using the newer versions of Huggingface. I am using the Trainer class. For me the crash happens either during the first evaluation step or right after it.",
"Also because the Trainer is such a critical code that will be used in production systems in companies and various research labs, it is very important that the Trainer code is stable and is well tested for correctness, performance (iterations/sec) and memory use for training and evaluation. The tests should also cover the various devices it supports, i.e. CPU, GPU and TPU.\r\n\r\nIt would be great if these tests could run every time a change is made in the trainer code, so that we have confidence that the Trainer is stable. Over the last 3 months I have seen a lot of bugs popping into huggingface master and trying to debug Trainer bugs is very unproductive for Huggingface's users.",
"The commit id where I do not see an increase in device memory for Trainer 8fcbe486e1592321e868f872545c8fd9d359a515 . I have reverted back to this commit id and my training pipeline works again.",
"I think whats happening is something changed in the Trainer code and now it suddenly takes a bit more memory. Because most of the people select the training batch size = 1 less than when they start seeing memory failures, the setup becomes extra sensitive to any increase in memory used by the Trainer.\r\nWith the current master, I tried training with a lower batch size and it trained properly. Although, I lose convergence speed because I process less examples and the iterations/seconds remain almost the same as the larger batch size.\r\nI would rather revert to the older commit than train with smaller batch sizes.",
"Hi @Backpackerice \r\nWould you mind sharing your code? It's hard to investigate a leak with just a general statement.",
"> Hi @Backpackerice\r\n> Would you mind sharing your code? It's hard to investigate a leak with just a general statement.\r\n\r\nPlease find below my code:\r\nTo explain a little bit, this is trying to run a dual bert - two different two inputs (with attention or concat method). But when I ran into this Cuda issues, I was only using text input from review text (not agent text). \r\n` \r\n\r\nclass ReviewClassification(BertPreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.num_labels = 2\r\n\r\n self.bert = BertModel(config)\r\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\r\n\r\n embedding_size = config.hidden_size\r\n\r\n self.classifier = nn.Linear(embedding_size, len(LABEL_NAME))\r\n self.init_weights()\r\n\r\n def forward(\r\n self,\r\n review_input_ids=None,\r\n review_attention_mask=None,\r\n review_token_type_ids=None,\r\n agent_input_ids=None,\r\n agent_attention_mask=None,\r\n agent_token_type_ids=None,\r\n labels=None,\r\n ):\r\n\r\n review_outputs = self.bert(\r\n review_input_ids,\r\n attention_mask=review_attention_mask,\r\n token_type_ids=review_token_type_ids,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n )\r\n \r\n feature = review_outputs[1] \r\n logits = self.classifier(feature)\r\n outputs = (logits,) # + outputs[2:] # add hidden states and attention if they are here\r\n\r\n if labels is not None:\r\n pos_weight=torch.tensor(8.85) # N_negative/N_positive from entire training set\r\n loss_fct = nn.BCEWithLogitsLoss(pos_weight=pos_weight).cuda()\r\n loss = loss_fct(logits, labels)\r\n outputs = (loss,) + outputs\r\n return outputs # (loss, logits, hidden_states, attentions) `\r\n",
"> I think whats happening is something changed in the Trainer code and now it suddenly takes a bit more memory. Because most of the people select the training batch size = 1 less than when they start seeing memory failures, the setup becomes extra sensitive to any increase in memory used by the Trainer.\r\n> With the current master, I tried training with a lower batch size and it trained properly. Although, I lose convergence speed because I process less examples and the iterations/seconds remain almost the same as the larger batch size.\r\n> I would rather revert to the older commit than train with smaller batch sizes.\r\n\r\nCurrently I temporarily solve the issues but creating a new SageMaker. Seems like on the old SageMaker, some phantom python processes were hogging the GPU cards. Also it was acting wired even with the same setting, it will provides significantly different results. ",
"I encountered similar problems. What I did was to uninstall the latest version of transformers (v3.4.0) and install v3.1.0 instead. My code works fine with the old version.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,610 | 1,610 | NONE | null | Hi there,
I am building a BERT binary classifier on SageMaker using PyTorch. Previously, when I ran the model, I set the batch size to 16 and the model was able to run successfully. However, after I stopped SageMaker yesterday and restarted it this morning, I can't run the model with a batch size of 16 anymore. I am able to run the model with a batch size of 8, but then the model does not produce the same result (of course). I didn't change anything else in between; all other settings are the same. (Except that I changed the SageMaker volume from 30GB to 200GB.)
Does anyone know what may cause this problem? I really want to reproduce the result with batch size 16.
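(For reference, gradient accumulation can reproduce the same effective batch size at lower per-step memory; a sketch with the other arguments elided:)
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=8,   # fits in memory
    gradient_accumulation_steps=2,   # 8 * 2 = effective batch size of 16
)
```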
Any answers will help and thank you in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7375/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7375/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7374/comments | https://api.github.com/repos/huggingface/transformers/issues/7374/events | https://github.com/huggingface/transformers/pull/7374 | 708,425,036 | MDExOlB1bGxSZXF1ZXN0NDkyNjU4ODM4 | 7,374 | Fix FP16 and attention masks in FunnelTransformer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=h1) Report\n> Merging [#7374](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ccb6f5c6da9e703766e8053581fddfc6dcc71a9?el=desc) will **decrease** coverage by `1.43%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7374 +/- ##\n==========================================\n- Coverage 78.20% 76.76% -1.44% \n==========================================\n Files 181 181 \n Lines 35751 35750 -1 \n==========================================\n- Hits 27959 27444 -515 \n- Misses 7792 8306 +514 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.72% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `94.04% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-39.79%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.36% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=footer). Last update [0ccb6f5...705ee7a](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yes let's keep it open until the problem is fully solved.",
"@LysandreJik this is ready for review and to be merged. Confirmed I can overfit the training set on a sequence classification task and train with the `fp16` flag so this should solve all problems with FunnelTransformer.",
"There's a failing Funnel integration that should be taken care of before merging."
] | 1,600 | 1,601 | 1,601 | COLLABORATOR | null | This `.float()` should have been removed; it was necessary before I converted the attention masks to floating types at the beginning of the Encoder's forward, but it's now useless (and bad for mixed precision, as shown in #7371).
Also, the attention masks were used the wrong way (0 for non-masked tokens, 1 for masked), which was incompatible with the way transformers tokenizers work.
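For reference, the convention being aligned with (the printed values depend on the checkpoint's tokenization, so treat them as illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
batch = tokenizer(["short text", "a somewhat longer piece of text"], padding=True)
# transformers convention: 1 = real token, 0 = padding
print(batch["attention_mask"])
```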
Fixes #7371 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7374/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7374",
"html_url": "https://github.com/huggingface/transformers/pull/7374",
"diff_url": "https://github.com/huggingface/transformers/pull/7374.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7374.patch",
"merged_at": 1601050839000
} |
https://api.github.com/repos/huggingface/transformers/issues/7373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7373/comments | https://api.github.com/repos/huggingface/transformers/issues/7373/events | https://github.com/huggingface/transformers/pull/7373 | 708,367,260 | MDExOlB1bGxSZXF1ZXN0NDkyNjEwMzY1 | 7,373 | [RAG] Add `attention_mask` to RAG generate | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=h1) Report\n> Merging [#7373](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8d3bb781ee2643ad1076f4cbcc6f417245671e94?el=desc) will **increase** coverage by `1.44%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7373 +/- ##\n==========================================\n+ Coverage 76.61% 78.05% +1.44% \n==========================================\n Files 181 181 \n Lines 35759 35759 \n==========================================\n+ Hits 27395 27911 +516 \n+ Misses 8364 7848 -516 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `76.98% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `17.16% <0.00%> (-81.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.69% <0.00%> (-74.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.23% <0.00%> (-72.70%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.13% <0.00%> (-15.42%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=footer). Last update [8d3bb78...e4e1ea8](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | MEMBER | null | Previously, the attention mask was not passed to the generate function, so the encoder_outputs were possibly wrong when the batch contained input ids of different lengths.
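A minimal sketch of the fixed call pattern (checkpoint and dummy index follow the RAG docs; exact tokenizer entry points may differ slightly across versions):
```python
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

# In a padded batch, pad tokens would otherwise leak into the question
# encoder and corrupt encoder_outputs for the shorter inputs.
inputs = tokenizer.question_encoder(
    ["who wrote hamlet?", "what is the capital of france?"],
    padding=True, return_tensors="pt",
)
generated = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```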
@ola13 Also fixed in eval script | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7373/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7373/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7373",
"html_url": "https://github.com/huggingface/transformers/pull/7373",
"diff_url": "https://github.com/huggingface/transformers/pull/7373.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7373.patch",
"merged_at": 1600982525000
} |
https://api.github.com/repos/huggingface/transformers/issues/7372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7372/comments | https://api.github.com/repos/huggingface/transformers/issues/7372/events | https://github.com/huggingface/transformers/pull/7372 | 708,305,591 | MDExOlB1bGxSZXF1ZXN0NDkyNTU4NTg4 | 7,372 | [RAG] Fix retrieval offset in RAG's HfIndex and better integration tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=h1) Report\n> Merging [#7372](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/571c7a11c17bd00ba3e79f4d853cc51428a14e45?el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `88.88%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7372 +/- ##\n==========================================\n- Coverage 77.64% 77.63% -0.01% \n==========================================\n Files 181 181 \n Lines 35722 35728 +6 \n==========================================\n+ Hits 27736 27738 +2 \n- Misses 7986 7990 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/retrieval\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9yZXRyaWV2YWxfcmFnLnB5) | `91.01% <88.88%> (-0.27%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-0.76%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=footer). Last update [571c7a1...9ecf660](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@lhoestq - can you check how the `RUN_SLOW` tests would change in this case? ",
"> @lhoestq - can you check how the `RUN_SLOW` tests would change in this case?\r\n\r\nThey change indeed. I updated the expected values.",
"@yjernite - could you take a final look and approve if everything seems fine to you? ",
"Okey great - this should be the last big fix for RAG. I'll rebase this PR and merge it after "
] | 1,600 | 1,601 | 1,601 | MEMBER | null | Address @yjernite 's comment in https://github.com/huggingface/transformers/pull/7129#discussion_r488904472
Indeed the retriever was returning the indexes offset by one.
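A toy illustration of why the offset matters (the data below is made up):
```python
docs = ["doc0", "doc1", "doc2", "doc3"]
retrieved_positions = [2, 0]  # positions returned by the index search

correct = [docs[i] for i in retrieved_positions]         # ['doc2', 'doc0']
off_by_one = [docs[i + 1] for i in retrieved_positions]  # ['doc3', 'doc1']
```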
Cc @ola13 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7372/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7372",
"html_url": "https://github.com/huggingface/transformers/pull/7372",
"diff_url": "https://github.com/huggingface/transformers/pull/7372.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7372.patch",
"merged_at": 1601043167000
} |
https://api.github.com/repos/huggingface/transformers/issues/7371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7371/comments | https://api.github.com/repos/huggingface/transformers/issues/7371/events | https://github.com/huggingface/transformers/issues/7371 | 708,305,304 | MDU6SXNzdWU3MDgzMDUzMDQ= | 7,371 | FunnelTransformerForSequenceClassification crashes when fine tuning with mixed precision flag | {
"login": "iAlex97",
"id": 12383594,
"node_id": "MDQ6VXNlcjEyMzgzNTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/12383594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iAlex97",
"html_url": "https://github.com/iAlex97",
"followers_url": "https://api.github.com/users/iAlex97/followers",
"following_url": "https://api.github.com/users/iAlex97/following{/other_user}",
"gists_url": "https://api.github.com/users/iAlex97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iAlex97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iAlex97/subscriptions",
"organizations_url": "https://api.github.com/users/iAlex97/orgs",
"repos_url": "https://api.github.com/users/iAlex97/repos",
"events_url": "https://api.github.com/users/iAlex97/events{/privacy}",
"received_events_url": "https://api.github.com/users/iAlex97/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for flagging!\r\nI think I have found the cause for this. Model runs fine on my end in half precision when it's applied.",
"Thanks for the quick fix, but unfortunately I checked out that branch (and installed from source) and I still get the issue at this line: https://github.com/huggingface/transformers/blob/624cb37b38574566522072c19659b4cff60b98f9/src/transformers/modeling_funnel.py#L544\r\n\r\nEdit (attached new stacktrace):\r\n```python\r\n File \"funnel.py\", line 90, in <module>\r\n trainer.train()\r\n File \"/root/transformers/src/transformers/trainer.py\", line 743, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/root/transformers/src/transformers/trainer.py\", line 1050, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/root/transformers/src/transformers/trainer.py\", line 1074, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/root/transformers/src/transformers/modeling_funnel.py\", line 1269, in forward\r\n return_dict=return_dict,\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/root/transformers/src/transformers/modeling_funnel.py\", line 955, in forward\r\n return_dict=return_dict,\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/root/transformers/src/transformers/modeling_funnel.py\", line 651, in forward\r\n layer_output = layer(query, key, value, attention_inputs, output_attentions=output_attentions)\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/root/transformers/src/transformers/modeling_funnel.py\", line 598, in forward\r\n attn = self.attention(query, key, value, attention_inputs, output_attentions=output_attentions)\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/root/transformers/src/transformers/modeling_funnel.py\", line 544, in forward\r\n content_score = torch.einsum(\"bind,bjnd->bnij\", q_head + r_w_bias, k_head)\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/functional.py\", line 292, in einsum\r\n return _VF.einsum(equation, operands)\r\nRuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' in call to _th_bmm\r\n```",
"What got me past this error was casting `.float()` on all tensor arguments to `torch.einsum()`, but then I ran into this issue:\r\n```python\r\n File \"funnel.py\", line 90, in <module>\r\n trainer.train()\r\n File \"/root/transformers/src/transformers/trainer.py\", line 743, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/root/transformers/src/transformers/trainer.py\", line 1062, in training_step\r\n scaled_loss.backward()\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/tensor.py\", line 198, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/autograd/__init__.py\", line 100, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\nRuntimeError: expected dtype Float but got dtype Long (validate_dtype at /opt/conda/conda-bld/pytorch_1591914880026/work/aten/src/ATen/native/TensorIterator.cpp:143)\r\nframe #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x4e (0x7f49c1e64b5e in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libc10.so)\r\nframe #1: at::TensorIterator::compute_types() + 0xce3 (0x7f49ea00c113 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #2: at::TensorIterator::build() + 0x44 (0x7f49ea00eaf4 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #3: at::native::mse_loss_backward_out(at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x193 (0x7f49e9e5c043 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #4: <unknown function> + 0xdfc047 (0x7f49c30ba047 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #5: at::native::mse_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x172 (0x7f49e9e64782 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #6: <unknown function> + 0xdfc2ff (0x7f49c30ba2ff in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #7: <unknown function> + 0xe20c26 (0x7f49ea294c26 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #8: <unknown function> + 0x27fd3cb (0x7f49ebc713cb in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #9: <unknown function> + 0xe20c26 (0x7f49ea294c26 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #10: torch::autograd::generated::MseLossBackward::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x1f7 (0x7f49eba78e67 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #11: <unknown function> + 0x2ae7df5 (0x7f49ebf5bdf5 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #12: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&) + 0x16f3 (0x7f49ebf590f3 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #13: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) + 0x3d2 (0x7f49ebf59ed2 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #14: torch::autograd::Engine::thread_init(int) + 0x39 (0x7f49ebf52549 in 
/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #15: torch::autograd::python::PythonEngine::thread_init(int) + 0x38 (0x7f49ef4a2638 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\r\nframe #16: <unknown function> + 0xc819d (0x7f49f1cfd19d in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6)\r\nframe #17: <unknown function> + 0x76db (0x7f4a0a4186db in /lib/x86_64-linux-gnu/libpthread.so.0)\r\nframe #18: clone + 0x3f (0x7f4a0a141a3f in /lib/x86_64-linux-gnu/libc.so.6)\r\n```",
"Okay, it turns out the first issue with `torch.einsum` was PyTorch's fault as the function did not accept mixed precision types. After updating it to `1.6.0` and recompiling nvidia APEX, I'm stuck with:\r\n```python\r\n File \"funnel.py\", line 90, in <module>\r\n trainer.train()\r\n File \"/root/transformers/src/transformers/trainer.py\", line 743, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/root/transformers/src/transformers/trainer.py\", line 1059, in training_step\r\n self.scaler.scale(loss).backward()\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/tensor.py\", line 185, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/autograd/__init__.py\", line 127, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\nRuntimeError: Found dtype Long but expected Float\r\nException raised from compute_types at /opt/conda/conda-bld/pytorch_1595629403081/work/aten/src/ATen/native/TensorIterator.cpp:183 (most recent call first):\r\nframe #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f6b6fede77d in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libc10.so)\r\nframe #1: at::TensorIterator::compute_types(at::TensorIteratorConfig const&) + 0x259 (0x7f6ba2f35ca9 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #2: at::TensorIterator::build(at::TensorIteratorConfig&) + 0x6b (0x7f6ba2f3944b in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #3: at::TensorIterator::TensorIterator(at::TensorIteratorConfig&) + 0xdd (0x7f6ba2f39abd in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #4: at::native::mse_loss_backward_out(at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x18a (0x7f6ba2d9e71a in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #5: <unknown function> + 0xd1d610 (0x7f6b71061610 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #6: at::native::mse_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x90 (0x7f6ba2d9b140 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #7: <unknown function> + 0xd1d6b0 (0x7f6b710616b0 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #8: <unknown function> + 0xd3f936 (0x7f6b71083936 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)\r\nframe #9: at::mse_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x119 (0x7f6ba325dda9 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #10: <unknown function> + 0x2b5e8c9 (0x7f6ba4eb68c9 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #11: <unknown function> + 0x7f60d6 (0x7f6ba2b4e0d6 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #12: at::mse_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x119 (0x7f6ba325dda9 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #13: torch::autograd::generated::MseLossBackward::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x1af (0x7f6ba4df252f in 
/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #14: <unknown function> + 0x30d1017 (0x7f6ba5429017 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #15: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7f6ba5424860 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #16: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f6ba5425401 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #17: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7f6ba541d579 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)\r\nframe #18: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7f6ba974c99a in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\r\nframe #19: <unknown function> + 0xc819d (0x7f6bac27e19d in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6)\r\nframe #20: <unknown function> + 0x76db (0x7f6bc49996db in /lib/x86_64-linux-gnu/libpthread.so.0)\r\nframe #21: clone + 0x3f (0x7f6bc46c2a3f in /lib/x86_64-linux-gnu/libc.so.6)\r\n```",
"Due to reducing my data-set to be able to load it faster and check various fixes, I was accidentally passing only one training labels to my classifier. Fixed this and the model started training, however, `loss` is always reported as `nan`.\r\n\r\nIs this an issue? I double checked and running without mixed precision mode correctly reports the loss and I can see it decreasing between log statements.",
"I can reproduce the losses being at `nan` and will try to investigate the source of this bug. Note that starting in PyTorch 1.6, apex is not used anymore for mixed precision training since PyTorch has native support for it.",
"I have found the reason (and why I wasn't managing to fine-tune a model on some GLUE task yesterday). Turns out I was matching exactly the implementation of the authors **but** in transformers, we put 1 in attentions masks for tokens not masked... stupid me.",
"Good thing to know I don't have to build APEX next time ;)\r\n\r\nI just pulled the latest commit from your branch and can confirm loss is no longer `nan`.\r\n\r\nGreat job and thanks for assistance!"
] | 1,600 | 1,601 | 1,601 | NONE | null | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-4.15.0-45-generic-x86_64-with-debian-buster-sid
- Python version: Python 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger As I saw you were the one who worked on the PR implementing Funnel Transformer
## Information
Model I am using: Funnel Transformer
The problem arises when using:
* [ o ] the official example scripts: (give details below)
* [ x ] my own modified scripts:
Only when enabling the mixed precision flag. I am now training the model without it, but I had to lower the batch size, thus increasing the training time.
I have to mention that I just fine-tuned a `roberta-base` model using `fp16=True` and `fp16_opt_level='O1'`, so NVIDIA Apex is properly installed/configured.
The tasks I am working on is:
* [ o ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset:
Basically I am trying to fine-tune `FunnelForSequenceClassification` using my own custom dataset:
```python
# some code to load data from CSV
# ...
# wrapper around PyTorch for holding datasets
class IMDbDataset(torch.utils.data.Dataset):
# same code as in the Huggingface docs
# ...
# load tokenizer
tokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/large-base')
# tokenize texts
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)
train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
test_dataset = IMDbDataset(test_encodings, test_labels)
# training args used
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
#learning_rate=35e-6,
weight_decay=0.01, # strength of weight decay
warmup_steps=500, # number of warmup steps for learning rate scheduler
logging_dir='./logs', # directory for storing logs
logging_steps=10,
fp16=True,
fp16_opt_level='O1' # here I tried both O1 and O2 with the same result
)
model = FunnelForSequenceClassification.from_pretrained('funnel-transformer/large-base',
return_dict=True,
num_labels=max(train_labels)+1)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
)
trainer.train()
trainer.save_model('funnel')
```
## To reproduce
Steps to reproduce the behavior:
1. Run script
2. Wait for script to reach the training part
Stacktrace:
```
File "funnel.py", line 89, in <module>
trainer.train()
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/trainer.py", line 741, in train
tr_loss += self.training_step(model, inputs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/trainer.py", line 1046, in training_step
loss = self.compute_loss(model, inputs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/trainer.py", line 1070, in compute_loss
outputs = model(**inputs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 1263, in forward
return_dict=return_dict,
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 950, in forward
return_dict=return_dict,
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 655, in forward
layer_output = layer(query, key, value, attention_inputs, output_attentions=output_attentions)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 602, in forward
attn = self.attention(query, key, value, attention_inputs, output_attentions=output_attentions)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 548, in forward
content_score = torch.einsum("bind,bjnd->bnij", q_head + r_w_bias, k_head)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/functional.py", line 292, in einsum
return _VF.einsum(equation, operands)
RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' in call to _th_bmm
```
[This](https://github.com/NVIDIA/apex/issues/302#issuecomment-552198322) seems like a very similar issue.
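If it helps, the workaround I have seen in similar reports is to force both einsum operands onto the same dtype. A minimal sketch against the `modeling_funnel.py` line from the trace above (hypothetical, not an actual library fix):
```python
# hypothetical workaround sketch: align the operand dtypes under mixed precision
q = (q_head + r_w_bias).to(k_head.dtype)
content_score = torch.einsum("bind,bjnd->bnij", q, k_head)
```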
## Expected behavior
We should be able to train the model with mixed precision to use VRAM more efficiently. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7371/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7370/comments | https://api.github.com/repos/huggingface/transformers/issues/7370/events | https://github.com/huggingface/transformers/issues/7370 | 708,279,452 | MDU6SXNzdWU3MDgyNzk0NTI= | 7,370 | Add new PET Model | {
"login": "sagarreddypatil",
"id": 16482184,
"node_id": "MDQ6VXNlcjE2NDgyMTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/16482184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagarreddypatil",
"html_url": "https://github.com/sagarreddypatil",
"followers_url": "https://api.github.com/users/sagarreddypatil/followers",
"following_url": "https://api.github.com/users/sagarreddypatil/following{/other_user}",
"gists_url": "https://api.github.com/users/sagarreddypatil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sagarreddypatil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sagarreddypatil/subscriptions",
"organizations_url": "https://api.github.com/users/sagarreddypatil/orgs",
"repos_url": "https://api.github.com/users/sagarreddypatil/repos",
"events_url": "https://api.github.com/users/sagarreddypatil/events{/privacy}",
"received_events_url": "https://api.github.com/users/sagarreddypatil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"The readme in the repo still says this:\r\n\r\n> :rotating_light: This repository does not yet contain the modifications to PET introduced in \"[It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners](https://arxiv.org/abs/2009.07118)\" but will be updated soon.",
"Looks like the authors updated the repo and added the necessary model.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"While I don't have the time to add PET to this repository myself, I'm always happy to help if someone wants to take it on :)"
] | 1,600 | 1,615 | null | NONE | null | # 🌟 New model addition
## Model description
A new article just landed on arXiv: https://arxiv.org/pdf/2009.07118.pdf
An implementation will eventually be available at https://github.com/timoschick/pet
Authors are @timoschick and Hinrich Schutze.
I didn't see any pre-trained models linked on the GitHub README, but the model is pretty small and easy to train.
Update: the code is now available open source, and it can presumably use pretrained BERT models (I do not know how this works, but the GitHub page states that the roberta-large pretrained model can be used). The model also works unsupervised.
## Open source status
* [x] the model implementation is available: (give details)
* [x] the model weights are available: (give details)
* [x] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7370/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7370/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7369/comments | https://api.github.com/repos/huggingface/transformers/issues/7369/events | https://github.com/huggingface/transformers/issues/7369 | 708,267,832 | MDU6SXNzdWU3MDgyNjc4MzI= | 7,369 | The absence of source/target language parameters when using MBart in Summarization example | {
"login": "shiningliang",
"id": 11460366,
"node_id": "MDQ6VXNlcjExNDYwMzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/11460366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shiningliang",
"html_url": "https://github.com/shiningliang",
"followers_url": "https://api.github.com/users/shiningliang/followers",
"following_url": "https://api.github.com/users/shiningliang/following{/other_user}",
"gists_url": "https://api.github.com/users/shiningliang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shiningliang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shiningliang/subscriptions",
"organizations_url": "https://api.github.com/users/shiningliang/orgs",
"repos_url": "https://api.github.com/users/shiningliang/repos",
"events_url": "https://api.github.com/users/shiningliang/events{/privacy}",
"received_events_url": "https://api.github.com/users/shiningliang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"add \r\n```\r\n self.dataset_kwargs[\"src_lang\"] = hparams.src_lang\r\n self.dataset_kwargs[\"tgt_lang\"] = hparams.tgt_lang\r\n```\r\nhere https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L70"
] | 1,600 | 1,600 | 1,600 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Platform: Ubuntu 16.04
- Python version: 3.7
- PyTorch version (GPU?): 1.6.0+cu101
- Tensorflow version (GPU?): 1.15
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
Summarization: @sshleifer
## Information
Model I am using (Bert, XLNet ...): MBart
The problem arises when using:
* [x] the official example scripts: (give details below)
I'm following the example for finetuning a summarization model.
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
xsum
## To reproduce
Steps to reproduce the behavior:
1. Using the script [finetune.sh](https://github.com/huggingface/transformers/blob/78387cc63e/examples/seq2seq/finetune.sh)
2. Keep all the default parameters
3. Add --model_name_or_path facebook/mbart-large-cc25 --data_dir datasets/xsum --src_lang en_XX --tgt_lang en_XX
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):
File "/home/shining/code/seq2seq/finetune.py", line 440, in <module>
main(args)
File "/home/shining/code/seq2seq/finetune.py", line 415, in main
logger=logger,
File "/home/shining/code/seq2seq/lightning_base.py", line 385, in generic_train
trainer.fit(model)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1073, in fit
results = self.accelerator_backend.train(model)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py", line 51, in train
results = self.trainer.run_pretrain_routine(model)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 305, in _evaluate
for batch_idx, batch in enumerate(dataloader):
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
data = self._next_data()
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
return self._process_data(data)
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
data.reraise()
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
data = fetcher.fetch(index)
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/shining/code/seq2seq/utils.py", line 232, in collate_fn
add_prefix_space=self.add_prefix_space,
File "/home/shining/miniconda3/lib/python3.7/site-packages/transformers/tokenization_mbart.py", line 236, in prepare_seq2seq_batch
self.set_src_lang_special_tokens(src_lang)
File "/home/shining/miniconda3/lib/python3.7/site-packages/transformers/tokenization_mbart.py", line 268, in set_src_lang_special_tokens
self.cur_lang_code = self.lang_code_to_id[src_lang]
KeyError: None
```
## Expected behavior
Because the summarization example uses the pytorch-lightning backend, I could only trace the bug to the collate_fn function in Seq2SeqDataset. I noticed that the parameter self.src_lang=None.
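Presumably the batch preparation needs the languages passed through explicitly for that lookup to succeed; a sketch of the call I would expect (hypothetical, not the example's actual code):
```python
batch = tokenizer.prepare_seq2seq_batch(
    src_texts,
    tgt_texts=tgt_texts,
    src_lang="en_XX",  # must be a valid MBart language code
    tgt_lang="en_XX",
)
```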
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7369/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7368/comments | https://api.github.com/repos/huggingface/transformers/issues/7368/events | https://github.com/huggingface/transformers/pull/7368 | 708,222,744 | MDExOlB1bGxSZXF1ZXN0NDkyNDg5NjYy | 7,368 | Formatter | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=h1) Report\n> Merging [#7368](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cffa424f855cbbd657c4f1b57f94a51b7aa8d6d?el=desc) will **decrease** coverage by `1.34%`.\n> The diff coverage is `22.22%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7368 +/- ##\n==========================================\n- Coverage 76.51% 75.17% -1.35% \n==========================================\n Files 181 181 \n Lines 34851 34860 +9 \n==========================================\n- Hits 26666 26205 -461 \n- Misses 8185 8655 +470 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `79.31% <22.22%> (-6.59%)` | :arrow_down: |\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/tokenization\\_phobert.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGhvYmVydC5weQ==) | `21.80% <0.00%> (-61.66%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.34% <0.00%> (-42.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `62.69% <0.00%> (-28.58%)` | :arrow_down: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=footer). Last update [0cffa42...ee18eb2](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | MEMBER | null | Add two new methods to the logging utility to automatically set the format like it is done in the `examples/` folder. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7368/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7368",
"html_url": "https://github.com/huggingface/transformers/pull/7368",
"diff_url": "https://github.com/huggingface/transformers/pull/7368.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7368.patch",
"merged_at": 1600959562000
} |
https://api.github.com/repos/huggingface/transformers/issues/7367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7367/comments | https://api.github.com/repos/huggingface/transformers/issues/7367/events | https://github.com/huggingface/transformers/issues/7367 | 708,215,158 | MDU6SXNzdWU3MDgyMTUxNTg= | 7,367 | Finetuning Pegasus for summarization task | {
"login": "banunitte",
"id": 6847024,
"node_id": "MDQ6VXNlcjY4NDcwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6847024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/banunitte",
"html_url": "https://github.com/banunitte",
"followers_url": "https://api.github.com/users/banunitte/followers",
"following_url": "https://api.github.com/users/banunitte/following{/other_user}",
"gists_url": "https://api.github.com/users/banunitte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/banunitte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/banunitte/subscriptions",
"organizations_url": "https://api.github.com/users/banunitte/orgs",
"repos_url": "https://api.github.com/users/banunitte/repos",
"events_url": "https://api.github.com/users/banunitte/events{/privacy}",
"received_events_url": "https://api.github.com/users/banunitte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"notebook link :\r\nhttps://colab.research.google.com/drive/1c7G1WXE6mgl2rwA-VR7q8DAqmbVqB62m?usp=sharing#scrollTo=VRzl54I-5isw ",
"\r\n",
"Facing the same issue. A reply on this will be highly appreciated.",
"This might help! Though implementation documentation is in tensorflow\r\n\r\nhttps://github.com/google-research/pegasus#finetuning-on-downstream-datasets",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,600 | 1,619 | 1,619 | NONE | null | I have been trying to fine-tune Pegasus for a summarization task; it worked fine without any errors.
But when I tried to generate the summary, I was only getting an empty list as output.
I am not able to figure it out. Is anything wrong with my fine-tuning script?
```py
# assumed imports/setup (omitted in the original report); model, tokenizer,
# train_dataset, t_total, learning_rate, adam_epsilon, warmup_steps,
# weight_decay and device are defined earlier in my script
import time
import torch
from transformers import AdamW, get_linear_schedule_with_warmup
from transformers.modeling_bart import shift_tokens_right  # location in transformers 3.x

def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=-100):
"""From fairseq"""
if target.dim() == lprobs.dim() - 1:
target = target.unsqueeze(-1)
nll_loss = -lprobs.gather(dim=-1, index=target)
smooth_loss = -lprobs.sum(dim=-1, keepdim=True)
if ignore_index is not None:
pad_mask = target.eq(ignore_index)
nll_loss.masked_fill_(pad_mask, 0.0)
smooth_loss.masked_fill_(pad_mask, 0.0)
else:
nll_loss = nll_loss.squeeze(-1)
smooth_loss = smooth_loss.squeeze(-1)
nll_loss = nll_loss.sum() # mean()? Scared to break other math.
smooth_loss = smooth_loss.sum()
eps_i = epsilon / lprobs.size(-1)
loss = (1.0 - epsilon) * nll_loss + eps_i * smooth_loss
return loss, nll_loss
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": weight_decay,
},
{"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
]
optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate, eps=adam_epsilon)
scheduler = get_linear_schedule_with_warmup(
optimizer, num_warmup_steps=warmup_steps, num_training_steps=t_total
)
pad_token_id = tokenizer.pad_token_id
epochs = 5
for epoc in range(epochs):
t0 = time.time()
print("")
print('======== Epoch {} ========'.format(epoc+1))
model.train()
total_train_loss = 0
for i,batch in enumerate(train_dataset):
title = []
body = []
for item in batch['title'].numpy():
title.append(item.decode('utf-8'))
for item in batch['body'].numpy():
body.append(item.decode('utf-8'))
batch_tokens = tokenizer.prepare_seq2seq_batch(body,title,max_length=320,max_target_length=60,truncation=True,padding='max_length').to(device)
decoder_input_ids = shift_tokens_right(batch_tokens['labels'], pad_token_id)
outputs = model(batch_tokens['input_ids'], attention_mask=batch_tokens['attention_mask'], decoder_input_ids=decoder_input_ids, use_cache=False)
lm_logits = outputs[0]
lprobs = torch.nn.functional.log_softmax(lm_logits, dim=-1)
loss, nll_loss = label_smoothed_nll_loss(
lprobs, batch_tokens['labels'],0.1, ignore_index=pad_token_id
)
total_train_loss += loss.item()
optimizer.zero_grad()
loss.backward()
#torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
```
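For context, I generate summaries roughly like this (a minimal sketch; my exact generation call is not shown above):
```python
# minimal generation sketch, assuming the fine-tuned model/tokenizer from above
model.eval()
batch = tokenizer.prepare_seq2seq_batch(
    ["some document text ..."], max_length=320, truncation=True, padding='max_length'
).to(device)
generated = model.generate(batch['input_ids'], attention_mask=batch['attention_mask'],
                           num_beams=4, max_length=60)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```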
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7367/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7366/comments | https://api.github.com/repos/huggingface/transformers/issues/7366/events | https://github.com/huggingface/transformers/issues/7366 | 708,190,388 | MDU6SXNzdWU3MDgxOTAzODg= | 7,366 | test_rag_sequence_generate_batch failing on CUDA | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2373468354,
"node_id": "MDU6TGFiZWwyMzczNDY4MzU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/rag",
"name": "rag",
"color": "e58e85",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yeah, If you run on CPU the test passes - I added a comment that the test fails on GPU: https://github.com/huggingface/transformers/blob/9e68d075a4100906509170498480823e7e61874a/tests/test_modeling_rag.py#L659\r\n\r\nBeam search seems very sensible to small changes.",
"you mean sensitive, but OK. Maybe we should skip the test on CUDA so that slow CI isn't broken?",
"Actually, I fixed all `RagSequence` related bugs today and added better integration tests that should all pass on both CPU and GPU => so I think it's fine now.\r\n\r\nSee https://github.com/huggingface/transformers/blob/cf1c88e0921243e760d306e63a5938e1bac880f3/tests/test_modeling_rag.py#L664"
] | 1,600 | 1,601 | 1,601 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/runs/1157849932?check_suite_focus=true
```
> self.assertEqual(output_text_1, EXPECTED_OUTPUT_TEXT_1)
E AssertionError: 'The song peaked at number 17 in the' != '"I Know Him So Well"'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7365/comments | https://api.github.com/repos/huggingface/transformers/issues/7365/events | https://github.com/huggingface/transformers/pull/7365 | 708,186,057 | MDExOlB1bGxSZXF1ZXN0NDkyNDU5MjUy | 7,365 | Fixing case in which `Trainer` hung while saving model in distributed training | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=h1) Report\n> Merging [#7365](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `2.52%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7365 +/- ##\n==========================================\n+ Coverage 76.58% 79.11% +2.52% \n==========================================\n Files 181 181 \n Lines 34828 34827 -1 \n==========================================\n+ Hits 26674 27552 +878 \n+ Misses 8154 7275 -879 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.70% <50.00%> (+0.08%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.39% <0.00%> (-51.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.10% <0.00%> (-29.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |\n| [src/transformers/configuration\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `97.77% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.62% <0.00%> (-1.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |\n| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=footer). Last update [28cf873...3213d34](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Reminder that we need to add some CI infra (and tests) for multi-gpu and/or multi-node setups"
] | 1,600 | 1,601 | 1,600 | CONTRIBUTOR | null | As found thanks to the great @mfuntowicz , the call to `store_flos` in `Trainer` can hang indefinitely, as it was only executed in the main thread and in some cases the other threads were already past this point. This PR moves this call in order to avoid this behaviour. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7365/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7365/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7365",
"html_url": "https://github.com/huggingface/transformers/pull/7365",
"diff_url": "https://github.com/huggingface/transformers/pull/7365.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7365.patch",
"merged_at": 1600955800000
} |
https://api.github.com/repos/huggingface/transformers/issues/7364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7364/comments | https://api.github.com/repos/huggingface/transformers/issues/7364/events | https://github.com/huggingface/transformers/issues/7364 | 708,115,983 | MDU6SXNzdWU3MDgxMTU5ODM= | 7,364 | Getting "TypeError: forward() got multiple values for argument 'attention_mask'" when replacing pytorch_transformers with transformers | {
"login": "wailoktam",
"id": 12331528,
"node_id": "MDQ6VXNlcjEyMzMxNTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/12331528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wailoktam",
"html_url": "https://github.com/wailoktam",
"followers_url": "https://api.github.com/users/wailoktam/followers",
"following_url": "https://api.github.com/users/wailoktam/following{/other_user}",
"gists_url": "https://api.github.com/users/wailoktam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wailoktam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wailoktam/subscriptions",
"organizations_url": "https://api.github.com/users/wailoktam/orgs",
"repos_url": "https://api.github.com/users/wailoktam/repos",
"events_url": "https://api.github.com/users/wailoktam/events{/privacy}",
"received_events_url": "https://api.github.com/users/wailoktam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi, I believe this is the cause of your issue: https://huggingface.co/transformers/migration.html#positional-order-of-some-models-keywords-inputs-attention-mask-token-type-ids-changed",
"Thanks. I agree. Can you suggest where I should fix in the codes given in the error log?",
"I can't really see where's your code, do you mind pasting a snippet that reproduces the error? (using backticks \\`\\`\\` to format it)",
"Thanks. I highlight the codes given in the error log. In case, it is of no use. Please let me know how I should dig up the relevant portion. \r\n\r\n/content/train_extractive.py in train_ext(args, device_id)\r\n```\r\n225 train_multi_ext(args)\r\n226 else:\r\n--> 227 train_single_ext(args, device_id)\r\n228\r\n229\r\n```\r\n\r\n/content/train_extractive.py in train_single_ext(args, device_id)\r\n```\r\n267\r\n268 trainer = build_trainer(args, device_id, model, optim)\r\n--> 269 trainer.train(train_iter_fct, args.train_steps)\r\n```\r\n\r\n/content/trainer_ext.py in train(self, train_iter_fct, train_steps, valid_iter_fct, valid_steps)\r\n\r\n```\r\n150 self._gradient_accumulation(\r\n151 true_batchs, normalization, total_stats,\r\n--> 152 report_stats)\r\n153\r\n154 report_stats = self._maybe_report_training(\r\n```\r\n\r\n/content/trainer_ext.py in _gradient_accumulation(self, true_batchs, normalization, total_stats, report_stats)\r\n```\r\n393 mask_cls = batch.mask_cls\r\n394\r\n--> 395 sent_scores, mask = self.model(src, segs, clss, mask, mask_cls)\r\n396\r\n397 loss = self.loss(sent_scores, labels.float())\r\n```\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n```\r\n720 result = self._slow_forward(*input, **kwargs)\r\n721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n723 for hook in itertools.chain(\r\n724 _global_forward_hooks.values(),\r\n```\r\n\r\n/content/model_builder.py in forward(self, src, segs, clss, mask_src, mask_cls)\r\n```\r\n176 print (type(mask_src))\r\n177 print (mask_src)\r\n--> 178 top_vec = self.bert(src, segs, mask_src)\r\n179 sents_vec = top_vec[torch.arange(top_vec.size(0)).unsqueeze(1), clss]\r\n180 sents_vec = sents_vec * mask_cls[:, :, None].float()\r\n```\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n```\r\n720 result = self._slow_forward(*input, **kwargs)\r\n721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n723 for hook in itertools.chain(\r\n724 _global_forward_hooks.values(),\r\n```\r\n\r\n/content/model_builder.py in forward(self, x, segs, mask)\r\n```\r\n126 def forward(self, x, segs, mask):\r\n127 if(self.finetune):\r\n--> 128 top_vec, _ = self.model(x, segs, attention_mask=mask)\r\n129 else:\r\n130 self.eval()\r\n```\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n\r\n```\r\n720 result = self._slow_forward(*input, **kwargs)\r\n721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n723 for hook in itertools.chain(\r\n724 _global_forward_hooks.values(),\r\n```\r\n",
"I try replacing all the parameters in \r\n\r\n```\r\n--> 128 top_vec, _ = self.model(x, segs, attention_mask=mask)\r\n```\r\nof /content/model_builder.py in forward(self, x, segs, mask)\r\nto\r\n```\r\n--> 128 top_vec, _ = self.model(input_ids=x, token_type_ids=segs, attention_mask=mask)\r\n```\r\nNow the changed line gives this error:\r\n\r\nValueError: not enough values to unpack (expected 2, got 1)\r\n\r\nNow I am left with no clue. \r\n",
"I remove the \"_\" from the returned value of model in \r\n\r\n```\r\n--> 128 top_vec, _ = self.model(input_ids=x, token_type_ids=segs, attention_mask=mask)\r\n```\r\n\r\nAnd get away with the error: ValueError: not enough values to unpack (expected 2, got 1)\r\n\r\nBut now I receive from this line:\r\n\r\n```\r\n 178 print (mask_src)\r\n 179 top_vec = self.bert(src, segs, mask_src)\r\n--> 180 sents_vec = top_vec[torch.arange(top_vec.size(0)).unsqueeze(1), clss]\r\n 181 sents_vec = sents_vec * mask_cls[:, :, None].float()\r\n 182 sent_scores = self.ext_layer(sents_vec, mask_cls).squeeze(-1)\r\n```\r\n\r\nof /content/model_builder.py in forward(self, src, segs, clss, mask_src, mask_cls) the following complaint:\r\n\r\nAttributeError: 'tuple' object has no attribute 'size'\r\n\r\nSo the returned value of bert has changed. bert is an instance of BertForMaskedLM.from_pretrained('cl-tohoku/bert-base-japanese-whole-word-masking'). How do I get back the size of the first returned value of a pretrained model in the old torch_transformers?\r\n",
"Instead of simply removing the `_` value, which will not unpack the tuple anymore, you can get the first value of the tuple (which has a single value in your case):\r\n\r\n```py\r\ntop_vec = self.model(x, segs, attention_mask=mask)[0]\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,608 | 1,608 | NONE | null | # 📚 Migration
## Information
<!-- Important information -->
Model I am using (Bert):
Language I am using the model on (Japanese):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## Details
<!-- A clear and concise description of the migration issue.
If you have code snippets, please provide it here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
-->
This is the complaint from python:
/content/train_extractive.py in train_ext(args, device_id)
225 train_multi_ext(args)
226 else:
--> 227 train_single_ext(args, device_id)
228
229
/content/train_extractive.py in train_single_ext(args, device_id)
267
268 trainer = build_trainer(args, device_id, model, optim)
--> 269 trainer.train(train_iter_fct, args.train_steps)
/content/trainer_ext.py in train(self, train_iter_fct, train_steps, valid_iter_fct, valid_steps)
150 self._gradient_accumulation(
151 true_batchs, normalization, total_stats,
--> 152 report_stats)
153
154 report_stats = self._maybe_report_training(
/content/trainer_ext.py in _gradient_accumulation(self, true_batchs, normalization, total_stats, report_stats)
393 mask_cls = batch.mask_cls
394
--> 395 sent_scores, mask = self.model(src, segs, clss, mask, mask_cls)
396
397 loss = self.loss(sent_scores, labels.float())
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/content/model_builder.py in forward(self, src, segs, clss, mask_src, mask_cls)
176 print (type(mask_src))
177 print (mask_src)
--> 178 top_vec = self.bert(src, segs, mask_src)
179 sents_vec = top_vec[torch.arange(top_vec.size(0)).unsqueeze(1), clss]
180 sents_vec = sents_vec * mask_cls[:, :, None].float()
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/content/model_builder.py in forward(self, x, segs, mask)
126 def forward(self, x, segs, mask):
127 if(self.finetune):
--> 128 top_vec, _ = self.model(x, segs, attention_mask=mask)
129 else:
130 self.eval()
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
TypeError: forward() got multiple values for argument 'attention_mask'
***
I get the above complaint after replacing pytorch-transformers with transformers.
from pytorch_transformers import BertModel
->
from transformers import BertForMaskedLM
I have to make this change because I am importing the Japanese model, while the original code calling BertModel only caters to the English model.
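Presumably the call in model_builder.py has to follow the new conventions from the migration guide; a sketch (keyword arguments plus indexing the returned tuple):
```python
# keyword arguments avoid depending on the positional order of attention_mask,
# and tuple indexing replaces the old two-value unpack
outputs = self.model(input_ids=x, token_type_ids=segs, attention_mask=mask)
top_vec = outputs[0]  # first element of the returned tuple
```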
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
python: can't open file 'transformers-cli': [Errno 2] No such file or directory
- `transformers` version:
- Platform: ubuntu colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101
- Tensorflow version (GPU?): 2.3
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch):
pytorch-transformers
## Checklist
- [x] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [x] I checked if a related official extension example runs on my machine.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7364/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7364/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7363/comments | https://api.github.com/repos/huggingface/transformers/issues/7363/events | https://github.com/huggingface/transformers/pull/7363 | 708,067,314 | MDExOlB1bGxSZXF1ZXN0NDkyMzU5NTE5 | 7,363 | Check config type using `type` instead of `isinstance` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In that case we can even remove the for loops entirely, no?",
"I agree with @julien-c, we can directly check if `type(config)` is in the dict.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=h1) Report\n> Merging [#7363](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cffa424f855cbbd657c4f1b57f94a51b7aa8d6d?el=desc) will **increase** coverage by `0.22%`.\n> The diff coverage is `43.52%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7363 +/- ##\n==========================================\n+ Coverage 76.51% 76.73% +0.22% \n==========================================\n Files 181 181 \n Lines 34851 34811 -40 \n==========================================\n+ Hits 26666 26713 +47 \n+ Misses 8185 8098 -87 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `74.30% <25.00%> (+4.95%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.04% <55.00%> (+3.00%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.53% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `88.28% <0.00%> (+55.85%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=footer). Last update [0cffa42...1833b45](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Done! @julien-c @sgugger ",
"This is faster than catching a KeyError, that's what you do it that way?"
] | 1,600 | 1,601 | 1,601 | MEMBER | null | This seems like the textbook case where using `type` should be preferred over using `isinstance`.
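A minimal illustration of the difference, using the config classes as an example (this is not the mapping code itself):
```python
from transformers import BertConfig, RobertaConfig

config = RobertaConfig()
# isinstance also matches parent classes (RobertaConfig subclasses BertConfig),
# so an isinstance-based lookup can resolve to the wrong entry
assert isinstance(config, BertConfig)
# type() matches the exact class only, which is what the mapping needs
assert type(config) is RobertaConfig
assert type(config) is not BertConfig
```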
Thanks to @hjptriplebee for showing the way in https://github.com/huggingface/transformers/pull/6870, this PR does the same for all remaining cases. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7363/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7363",
"html_url": "https://github.com/huggingface/transformers/pull/7363",
"diff_url": "https://github.com/huggingface/transformers/pull/7363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7363.patch",
"merged_at": 1601024950000
} |
https://api.github.com/repos/huggingface/transformers/issues/7362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7362/comments | https://api.github.com/repos/huggingface/transformers/issues/7362/events | https://github.com/huggingface/transformers/issues/7362 | 708,058,196 | MDU6SXNzdWU3MDgwNTgxOTY= | 7,362 | Difference between tokenize chinese char | {
"login": "wlhgtc",
"id": 16603773,
"node_id": "MDQ6VXNlcjE2NjAzNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wlhgtc",
"html_url": "https://github.com/wlhgtc",
"followers_url": "https://api.github.com/users/wlhgtc/followers",
"following_url": "https://api.github.com/users/wlhgtc/following{/other_user}",
"gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions",
"organizations_url": "https://api.github.com/users/wlhgtc/orgs",
"repos_url": "https://api.github.com/users/wlhgtc/repos",
"events_url": "https://api.github.com/users/wlhgtc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wlhgtc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,607 | 1,607 | CONTRIBUTOR | null | The `BertTokenizer` class has a parameter `tokenize_chinese_chars`, which defaults to `True`.
When I set it to `False`, I got a different result, as follows:
```
1. tokenize chinese char: ['任', '务']
2. not tokenize chinese char: ['任', '##务']
```
The code is as follows (任务 means "task" in English):
```python
from transformers import BertTokenizer  # assumed import (omitted in the original)

vocab_file = './resources/robert/vocab.txt'
bert_tokenizer1 = BertTokenizer(vocab_file, tokenize_chinese_chars=True)
bert_tokenizer2 = BertTokenizer(vocab_file, tokenize_chinese_chars=False)
text = '任务'
res1 = bert_tokenizer1.tokenize(text)
res2 = bert_tokenizer2.tokenize(text)
print('tokenize chinese char:', res1)
print('not tokenize chinese char:' ,res2)
```
If I use the default setting, I will get the first result. That way, **nearly half of the vocab words will never be used** (like `'##务'`)!
Because we split all Chinese characters, we will never reach `'任务'` in `WordpieceTokenizer`. It's weird.
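My current understanding (which may be wrong) is that with `tokenize_chinese_chars=True` the basic tokenizer surrounds every CJK character with spaces, so the wordpiece step only ever sees single characters; roughly:
```python
# rough sketch of the internal behaviour, not the actual library code
text = '任务'
spaced = ' '.join(text)  # '任 务': each CJK char becomes its own basic token
# WordpieceTokenizer then matches each single character against the vocab,
# so continuation pieces like '##务' are never reachable for CJK input
```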
Can somebody explain this setting for me ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7362/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7361/comments | https://api.github.com/repos/huggingface/transformers/issues/7361/events | https://github.com/huggingface/transformers/issues/7361 | 708,051,001 | MDU6SXNzdWU3MDgwNTEwMDE= | 7,361 | ImportError: cannot import name 'AutoModelForTokenClassification' | {
"login": "ganeshkharad2",
"id": 20132026,
"node_id": "MDQ6VXNlcjIwMTMyMDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/20132026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ganeshkharad2",
"html_url": "https://github.com/ganeshkharad2",
"followers_url": "https://api.github.com/users/ganeshkharad2/followers",
"following_url": "https://api.github.com/users/ganeshkharad2/following{/other_user}",
"gists_url": "https://api.github.com/users/ganeshkharad2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ganeshkharad2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ganeshkharad2/subscriptions",
"organizations_url": "https://api.github.com/users/ganeshkharad2/orgs",
"repos_url": "https://api.github.com/users/ganeshkharad2/repos",
"events_url": "https://api.github.com/users/ganeshkharad2/events{/privacy}",
"received_events_url": "https://api.github.com/users/ganeshkharad2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, what is your transformers version? Can you run `transformers-cli env` and paste the result here?",
"Here is the output for transformers-cli env\n\n2020-09-24 16:56:51.133770: W\ntensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not\nload dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1:\ncannot open shared object file: No such file or directory\n2020-09-24 16:56:51.133813: I\ntensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart\ndlerror if you do not have a GPU set up on your machine.\nWARNING:tensorflow:From\n/home/gnaeshkharad/allenv/t5env/lib/python3.6/site-packages/transformers/commands/env.py:36:\nis_gpu_available (from tensorflow.python.framework.test_util) is deprecated\nand will be removed in a future version.\nInstructions for updating:\nUse `tf.config.list_physical_devices('GPU')` instead.\n2020-09-24 16:56:53.097782: I\ntensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary\nis optimized with oneAPI Deep Neural Network Library (oneDNN)to use the\nfollowing CPU instructions in performance-critical operations: AVX2 FMA\nTo enable them in other operations, rebuild TensorFlow with the appropriate\ncompiler flags.\n2020-09-24 16:56:53.134585: I\ntensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency:\n2099940000 Hz\n2020-09-24 16:56:53.135040: I\ntensorflow/compiler/xla/service/service.cc:168] XLA service 0x5ffa320\ninitialized for platform Host (this does not guarantee that XLA will be\nused). Devices:\n2020-09-24 16:56:53.135065: I\ntensorflow/compiler/xla/service/service.cc:176] StreamExecutor device\n(0): Host, Default Version\n2020-09-24 16:56:53.137882: W\ntensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not\nload dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open\nshared object file: No such file or directory\n2020-09-24 16:56:53.137898: W\ntensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit:\nUNKNOWN ERROR (303)\n2020-09-24 16:56:53.137915: I\ntensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does\nnot appear to be running on this host (4F4W0X2):\n/proc/driver/nvidia/version does not exist\n\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two\nlast points.\n\n- `transformers` version: 3.2.0\n- Platform: Linux-5.4.0-47-generic-x86_64-with-Ubuntu-18.04-bionic\n- Python version: 3.6.9\n- PyTorch version (GPU?): 1.6.0 (False)\n- Tensorflow version (GPU?): 2.3.0 (False)\n- Using GPU in script?: <fill in>\n- Using distributed or parallel set-up in script?: <fill in>\n\nOn Thu, Sep 24, 2020 at 4:12 PM Lysandre Debut <[email protected]>\nwrote:\n\n> Hi, what is your transformers version? Can you run transformers-cli env\n> and paste the result here?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7361#issuecomment-698265526>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEZTBOR3ILZVKYMSCLZLXBTSHMPC5ANCNFSM4RYD7THQ>\n> .\n>\n",
"Could you run:\r\n\r\n```\r\nimport transformers \r\ntransformers.is_torch_available() \r\n```\r\n\r\nDoes this command returns `True` 🤔",
"Got False\r\n\r\nI have found the issue its in my environment...\r\nMy bad..Thanks for your time!!",
"Great, thanks @stefan-it!",
"> from transformers import AutoTokenizer, AutoModelForTokenClassification\r\n\r\nHi, i also meet this issue, transformers.is_torch_available() gives me true, but i still can't import AutoModelForTokenClassification",
"> > from transformers import AutoTokenizer, AutoModelForTokenClassification\r\n> \r\n> Hi, i also meet this issue, transformers.is_torch_available() gives me true, but i still can't import AutoModelForTokenClassification\r\n\r\nupdate transformers will be fine\r\n",
"> Could you run:\r\n> \r\n> ```\r\n> import transformers \r\n> transformers.is_torch_available() \r\n> ```\r\n> \r\n> Does this command returns `True` 🤔\r\n\r\nHi @stefan-it, the follwoing code snippet returns true but shows the same import error."
] | 1,600 | 1,662 | 1,600 | CONTRIBUTOR | null | # Was trying to use the model below, but got an import error for AutoModelForTokenClassification
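As a quick sanity check first, a minimal sketch (the PyTorch-backed auto classes are only exported when torch is visible to `transformers`):
```
import transformers

print(transformers.__version__)
print(transformers.is_torch_available())  # must be True for AutoModelForTokenClassification
```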
```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-large-discriminator-finetuned-conll03-english")
model = AutoModelForTokenClassification.from_pretrained("dbmdz/electra-large-discriminator-finetuned-conll03-english")
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7361/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7360/comments | https://api.github.com/repos/huggingface/transformers/issues/7360/events | https://github.com/huggingface/transformers/issues/7360 | 707,952,755 | MDU6SXNzdWU3MDc5NTI3NTU= | 7,360 | How to add some parameters in T5 (in T5Block layer) and initialize the original T5 parameters with pre-trained model and the new introduced parameters randomly? | {
"login": "SuHe36",
"id": 22442305,
"node_id": "MDQ6VXNlcjIyNDQyMzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/22442305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuHe36",
"html_url": "https://github.com/SuHe36",
"followers_url": "https://api.github.com/users/SuHe36/followers",
"following_url": "https://api.github.com/users/SuHe36/following{/other_user}",
"gists_url": "https://api.github.com/users/SuHe36/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuHe36/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuHe36/subscriptions",
"organizations_url": "https://api.github.com/users/SuHe36/orgs",
"repos_url": "https://api.github.com/users/SuHe36/repos",
"events_url": "https://api.github.com/users/SuHe36/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuHe36/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The simplest way to do this would be simply to update the `modeling_t5.py` file. You should first clone the repo and install that version in your virtual environment:\r\n\r\n```bash\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install -e \".[dev]\"\r\n```\r\n\r\nRight now if you load a T5 model it should tell you what layers it's ignoring:\r\n\r\n```py\r\nfrom transformers import T5Model\r\n\r\nmodel = T5Model.from_pretrained(\"t5-small\")\r\n```\r\nResults in:\r\n```\r\nSome weights of T5Model were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nNow if you edit the `modeling_t5.py` file, especially the `T5Block` as you mentioned:\r\n\r\n```py\r\nclass T5Block(nn.Module):\r\n def __init__(self, config, has_relative_attention_bias=False):\r\n super().__init__()\r\n self.is_decoder = config.is_decoder\r\n self.layer = nn.ModuleList()\r\n self.layer.append(T5LayerSelfAttention(config, has_relative_attention_bias=has_relative_attention_bias))\r\n if self.is_decoder:\r\n self.layer.append(T5LayerCrossAttention(config, has_relative_attention_bias=has_relative_attention_bias))\r\n\r\n self.layer.append(T5LayerFF(config))\r\n\r\n # ADDED LAYER BELOW\r\n self.extra_layer = nn.Linear(200, 200)\r\n```\r\nI've simply added an extra layer here called \"extra_layer\". I haven't done anything with it in the forward, it's up to you to decide how to use it. If you now re-run the code, you should see the following:\r\n\r\n```\r\nSome weights of T5Model were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.embed_tokens.weight', 'encoder.block.0.extra_layer.weight', 'encoder.block.0.extra_layer.bias', 'encoder.block.1.extra_layer.weight', 'encoder.block.1.extra_layer.bias', 'encoder.block.2.extra_layer.weight', 'encoder.block.2.extra_layer.bias', 'encoder.block.3.extra_layer.weight', 'encoder.block.3.extra_layer.bias', 'encoder.block.4.extra_layer.weight', 'encoder.block.4.extra_layer.bias', 'encoder.block.5.extra_layer.weight', 'encoder.block.5.extra_layer.bias', 'decoder.embed_tokens.weight', 'decoder.block.0.extra_layer.weight', 'decoder.block.0.extra_layer.bias', 'decoder.block.1.extra_layer.weight', 'decoder.block.1.extra_layer.bias', 'decoder.block.2.extra_layer.weight', 'decoder.block.2.extra_layer.bias', 'decoder.block.3.extra_layer.weight', 'decoder.block.3.extra_layer.bias', 'decoder.block.4.extra_layer.weight', 'decoder.block.4.extra_layer.bias', 'decoder.block.5.extra_layer.weight', 'decoder.block.5.extra_layer.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\nWhich means that all these new layers (all the extra_layer layers in T5Block) have been initialized randomly. The rest has been initialized from the checkpoint.\r\n\r\nHope this helps!",
"@LysandreJik Thanks, It helps to me!",
"@LysandreJik @SuHe36 I want to change some model parameters in the t5 model.\r\nSo Is it possible to edit the model class in **modeling_t5.py** if I already installed the transformer library using pip in my machine? (**Without cloning from the repo in a virtual environment as you mentioned in the above comment**)",
"If you want to edit the model file then I heavily recommend you clone the repo and install it in an editable way `pip install -e <path_to_clone>`",
"@LysandreJik Thanks.\r\nActually, I tried to edit the **configuration_t5.py** file.\r\nThis is the code I want to run for model creation\r\n```\r\nimport torch\r\nfrom transformers_master.src.transformers import T5ForConditionalGeneration\r\nmodel_name = \"t5-small\"\r\ntorch_device = 'cuda' if torch.cuda.is_available() else 'cpu'\r\nmodel = T5ForConditionalGeneration.from_pretrained(model_name).to(torch_device)\r\n```\r\nThe default initialization parameters of the **T5Config** class are as follows\r\n```\r\n self,\r\n vocab_size=32128,\r\n d_model=512,\r\n d_kv=64,\r\n d_ff=2048,\r\n num_layers=6,\r\n num_decoder_layers=None,\r\n num_heads=8,\r\n relative_attention_num_buckets=32,\r\n dropout_rate=0.1,\r\n layer_norm_epsilon=1e-6,\r\n initializer_factor=1.0,\r\n feed_forward_proj=\"relu\",\r\n is_encoder_decoder=True,\r\n use_cache=True,\r\n pad_token_id=0,\r\n eos_token_id=1,\r\n **kwargs\r\n```\r\n\r\nI changed the **d_model, d_kv,d_ff**, and **num_heads** from this configuration_t5.py file as follows.\r\n\r\n```\r\n d_model=256,\r\n d_kv=32,\r\n d_ff=1024,\r\n num_heads=6,\r\n``` \r\n\r\nBut after changing the above parameters, It showing the error given below\r\n```\r\nRuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration:\r\n\tsize mismatch for shared.weight: copying a param with shape torch.Size([32128, 512]) from checkpoint, the shape in current model is torch.Size([32128, 256]).\r\n\tsize mismatch for encoder.block.0.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.0.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.0.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.0.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight: copying a param with shape torch.Size([32, 8]) from checkpoint, the shape in current model is torch.Size([32, 6]).\r\n\tsize mismatch for encoder.block.0.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.0.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for encoder.block.0.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for encoder.block.0.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.1.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.1.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize 
mismatch for encoder.block.1.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.1.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for encoder.block.1.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.1.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for encoder.block.1.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for encoder.block.1.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.2.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.2.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.2.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.2.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for encoder.block.2.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.2.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for encoder.block.2.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for encoder.block.2.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.3.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.3.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.3.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.3.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for encoder.block.3.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current 
model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.3.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for encoder.block.3.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for encoder.block.3.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.4.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.4.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.4.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.4.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for encoder.block.4.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.4.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for encoder.block.4.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for encoder.block.4.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.5.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.5.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.5.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for encoder.block.5.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for encoder.block.5.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.block.5.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for encoder.block.5.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for encoder.block.5.layer.1.layer_norm.weight: copying a param with shape 
torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for encoder.final_layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.0.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.0.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.0.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.0.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight: copying a param with shape torch.Size([32, 8]) from checkpoint, the shape in current model is torch.Size([32, 6]).\r\n\tsize mismatch for decoder.block.0.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.0.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.0.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.0.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.0.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.0.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.0.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for decoder.block.0.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for decoder.block.0.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.1.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.1.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.1.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for 
decoder.block.1.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.1.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.1.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.1.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.1.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.1.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.1.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.1.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for decoder.block.1.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for decoder.block.1.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.2.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.2.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.2.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.2.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.2.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.2.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.2.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.2.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.2.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in 
current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.2.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.2.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for decoder.block.2.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for decoder.block.2.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.3.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.3.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.3.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.3.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.3.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.3.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.3.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.3.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.3.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.3.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.3.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for decoder.block.3.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for decoder.block.3.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.4.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.4.layer.0.SelfAttention.k.weight: copying a param with shape 
torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.4.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.4.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.4.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.4.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.4.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.4.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.4.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.4.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.4.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for decoder.block.4.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for decoder.block.4.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.5.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.5.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.5.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.5.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.5.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.5.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.5.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for 
decoder.block.5.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).\r\n\tsize mismatch for decoder.block.5.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).\r\n\tsize mismatch for decoder.block.5.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.block.5.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).\r\n\tsize mismatch for decoder.block.5.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).\r\n\tsize mismatch for decoder.block.5.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n\tsize mismatch for decoder.final_layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).\r\n```\r\n\r\n\r\nSo where I missed? How can I change the model configuration parameters- **d_model, d_kv,d_ff** and **num_heads**?"
] | 1,600 | 1,632 | 1,600 | NONE | null | Hi,
I want to add a new layer to `T5Block`.
However, I want to initialize all the original parameters from the pre-trained T5 checkpoint and the newly added ones randomly.
Can someone guide me on how that's possible, or point me in the right direction?
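For concreteness, a minimal sketch of what I mean (the `extra_layer` name is hypothetical, and `forward()` would still need to be changed to actually use it):
```
from transformers import T5Model
import torch.nn as nn

# from_pretrained() loads the original weights; modules attached afterwards
# keep their default (random) initialization.
model = T5Model.from_pretrained("t5-small")
for block in model.encoder.block:
    block.extra_layer = nn.Linear(200, 200)  # hypothetical new layer, randomly initialized
```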
Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7360/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7359/comments | https://api.github.com/repos/huggingface/transformers/issues/7359/events | https://github.com/huggingface/transformers/pull/7359 | 707,911,464 | MDExOlB1bGxSZXF1ZXN0NDkyMjMwODYy | 7,359 | Update modeling_tf_longformer.py | {
"login": "Line290",
"id": 26078517,
"node_id": "MDQ6VXNlcjI2MDc4NTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/26078517?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Line290",
"html_url": "https://github.com/Line290",
"followers_url": "https://api.github.com/users/Line290/followers",
"following_url": "https://api.github.com/users/Line290/following{/other_user}",
"gists_url": "https://api.github.com/users/Line290/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Line290/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Line290/subscriptions",
"organizations_url": "https://api.github.com/users/Line290/orgs",
"repos_url": "https://api.github.com/users/Line290/repos",
"events_url": "https://api.github.com/users/Line290/events{/privacy}",
"received_events_url": "https://api.github.com/users/Line290/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=h1) Report\n> Merging [#7359](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f17037957d325b5540a8031f065e6f23c9e265?el=desc) will **increase** coverage by `2.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7359 +/- ##\n==========================================\n+ Coverage 77.54% 79.55% +2.00% \n==========================================\n Files 181 181 \n Lines 34851 34851 \n==========================================\n+ Hits 27024 27724 +700 \n+ Misses 7827 7127 -700 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `98.67% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.03% <0.00%> (-73.03%)` | :arrow_down: |\n| [src/transformers/retrieval\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9yZXRyaWV2YWxfcmFnLnB5) | `28.48% <0.00%> (-62.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.39% <0.00%> (-51.59%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.74% <0.00%> (-14.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.16% <0.00%> (-2.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.62% <0.00%> (-1.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=footer). Last update [38f1703...90b186a](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | Corrects a very small mistake.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7359/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7359",
"html_url": "https://github.com/huggingface/transformers/pull/7359",
"diff_url": "https://github.com/huggingface/transformers/pull/7359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7359.patch",
"merged_at": 1600940250000
} |
https://api.github.com/repos/huggingface/transformers/issues/7358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7358/comments | https://api.github.com/repos/huggingface/transformers/issues/7358/events | https://github.com/huggingface/transformers/issues/7358 | 707,850,089 | MDU6SXNzdWU3MDc4NTAwODk= | 7,358 | Example for T5 model from doc is not working. | {
"login": "jayendra13",
"id": 651057,
"node_id": "MDQ6VXNlcjY1MTA1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/651057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayendra13",
"html_url": "https://github.com/jayendra13",
"followers_url": "https://api.github.com/users/jayendra13/followers",
"following_url": "https://api.github.com/users/jayendra13/following{/other_user}",
"gists_url": "https://api.github.com/users/jayendra13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayendra13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayendra13/subscriptions",
"organizations_url": "https://api.github.com/users/jayendra13/orgs",
"repos_url": "https://api.github.com/users/jayendra13/repos",
"events_url": "https://api.github.com/users/jayendra13/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayendra13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, you're using the wrong model. `labels` cannot work with the `TFT5Model` as that's just the base model. You're probably looking for `TFT5ForConditionalGeneration`, which is the T5 base model with a language modeling head:\r\n\r\n```py\r\nimport tensorflow as tf\r\nfrom transformers import TFT5ForConditionalGeneration\r\nmodel = TFT5ForConditionalGeneration.from_pretrained('t5-small')\r\ninputs = tf.constant([[ 1, 30, 4, 19, 7, 41, 20, 4, 25, 40, 13, 46, 27, 54, 25, 2]])\r\nprint(model(inputs, labels=inputs))\r\n```\r\n\r\noutputs:\r\n```\r\n(<tf.Tensor: shape=(16,), dtype=float32, numpy=\r\narray([7.5659113 , 4.1611323 , 0.7870086 , 0.19761924, 0.10837179,\r\n 1.0610694 , 0.53702176, 0.01974043, 0.1657649 , 0.07946267,\r\n 0.19164713, 0.10359508, 0.844127 , 0.04230493, 0.0681754 ,\r\n 0.1965235 ], dtype=float32)>, <tf.Tensor: shape=(1, 16, 32128), dtype=float32, numpy=\r\narray([[[-14.546758 , -7.1824822\r\n[...]\r\n```",
"I want to train a model for language translation would TFT5ForConditionalGeneration work?",
"Even this case is also not working.\r\n```\r\nmodel = TFT5Model.from_pretrained('t5-small')\r\nmodel(tf.constant([[ 1, 30, 4, 19, 7, 41, 20, 4, 25, 40, 13, 46, 27, 54, 25, 2]]))\r\n```\r\nwhich is throwing the same error as stated above.",
"Indeed, the error message is wrong here but as T5 is a seq2seq model it requires both `input_ids` and `decoder_input_ids`. We should update the docstrings/error message, but you can have more information [here, in the docs](https://huggingface.co/transformers/model_doc/t5.html#tft5model).\r\n\r\ncc @patrickvonplaten ",
"Gotcha! Thanks for clarifying that."
] | 1,600 | 1,601 | 1,601 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 2.3.0
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using: T5 (TFT5), following the [documentation](https://huggingface.co/transformers/model_doc/t5.html).
The problem arises when using:
* the official example scripts
## To reproduce
Steps to reproduce the behavior:
```
import tensorflow as tf
from transformers import TFT5Model
model = TFT5Model.from_pretrained('t5-small')
inputs = tf.constant([[ 1, 30, 4, 19, 7, 41, 20, 4, 25, 40, 13, 46, 27, 54, 25, 2]])
print(model(inputs, labels=inputs))
```
The above snippet throws the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-98-0a790979b424> in <module>()
----> 1 model(tf.constant([[ 1, 30, 4, 19, 7, 41, 20, 4, 25, 40, 13, 46, 27, 54, 25, 2]]))
3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
983
984 with ops.enable_auto_cast_variables(self._compute_dtype_object):
--> 985 outputs = call_fn(inputs, *args, **kwargs)
986
987 if self._activity_regularizer:
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_t5.py in call(self, inputs, attention_mask, encoder_outputs, inputs_embeds, head_mask, past_key_values, decoder_input_ids, decoder_attention_mask, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, training, **kwargs)
1104 output_hidden_states,
1105 ],
-> 1106 training=training,
1107 )
1108 past = (
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
983
984 with ops.enable_auto_cast_variables(self._compute_dtype_object):
--> 985 outputs = call_fn(inputs, *args, **kwargs)
986
987 if self._activity_regularizer:
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_t5.py in call(self, inputs, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, training, **kwargs)
646 input_shape = shape_list(inputs_embeds)[:-1]
647 else:
--> 648 raise ValueError("You have to specify either inputs or inputs_embeds")
649
650 if inputs_embeds is None:
ValueError: You have to specify either inputs or inputs_embeds
```
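For reference, a variant of the call that avoids the error (a sketch — since T5 is an encoder-decoder model, the base `TFT5Model` also needs decoder inputs):
```
import tensorflow as tf
from transformers import TFT5Model

model = TFT5Model.from_pretrained('t5-small')
inputs = tf.constant([[1, 30, 4, 19, 7, 41, 20, 4, 25, 40, 13, 46, 27, 54, 25, 2]])

# Pass decoder_input_ids explicitly; the same ids are reused here only for illustration.
outputs = model(inputs, decoder_input_ids=inputs)
```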
## Expected behavior
This should instead return an instance of `TFSeq2SeqModelOutput`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7358/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7357/comments | https://api.github.com/repos/huggingface/transformers/issues/7357/events | https://github.com/huggingface/transformers/issues/7357 | 707,827,308 | MDU6SXNzdWU3MDc4MjczMDg= | 7,357 | how can i convert bert pytorch to tf2 ? | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can save it and reload it:\r\n\r\n```py\r\npytorch_model.save_pretrained(\"here\")\r\ntf_model = TFBertModel.from_pretrained(\"here\")\r\n```",
"> You can save it and reload it:\r\n> \r\n> ```python\r\n> pytorch_model.save_pretrained(\"here\")\r\n> tf_model = TFBertModel.from_pretrained(\"here\")\r\n> ```\r\n\r\ni can not convert pytorch bert model to tenforflow 2.3 ? can you help me ? @LysandreJik ",
"Please provide more information, can you respect the issue template? What exactly are you trying to do? Do you have a PyTorch model? How did you get it, is it one of the checkpoints on the hub, is it fine-tuned?\r\n\r\nYou want to convert it to our TensorFlow API? \r\n\r\nPlease provide more details for us to help you better."
] | 1,600 | 1,601 | 1,600 | NONE | null | How can I convert a PyTorch BERT model to TF2? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7357/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7356/comments | https://api.github.com/repos/huggingface/transformers/issues/7356/events | https://github.com/huggingface/transformers/pull/7356 | 707,804,696 | MDExOlB1bGxSZXF1ZXN0NDkyMTQzODI3 | 7,356 | Fix eval to compute rouge correctly for rouge_score | {
"login": "swethmandava",
"id": 17828952,
"node_id": "MDQ6VXNlcjE3ODI4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/17828952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swethmandava",
"html_url": "https://github.com/swethmandava",
"followers_url": "https://api.github.com/users/swethmandava/followers",
"following_url": "https://api.github.com/users/swethmandava/following{/other_user}",
"gists_url": "https://api.github.com/users/swethmandava/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swethmandava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swethmandava/subscriptions",
"organizations_url": "https://api.github.com/users/swethmandava/orgs",
"repos_url": "https://api.github.com/users/swethmandava/repos",
"events_url": "https://api.github.com/users/swethmandava/events{/privacy}",
"received_events_url": "https://api.github.com/users/swethmandava/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, for the code quality test to pass you should run `make style` (to apply the style changes) and check with `make quality` (to make sure there is none left) at the root of your `transformers` directory.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=h1) Report\n> Merging [#7356](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f17037957d325b5540a8031f065e6f23c9e265?el=desc) will **decrease** coverage by `1.48%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7356 +/- ##\n==========================================\n- Coverage 77.54% 76.05% -1.49% \n==========================================\n Files 181 181 \n Lines 34851 34851 \n==========================================\n- Hits 27024 26507 -517 \n- Misses 7827 8344 +517 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| ... 
and [22 more](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=footer). Last update [38f1703...c7e4959](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for the contribution @swethmandava !\r\n\r\nI'm going to run this over the weekend to see how metrics have changed.\r\nThen if all goes well I'll merge this on Monday.\r\nIf you don't hear from me by Tuesday, please ping :)",
"I cleaned up and added tests in https://github.com/huggingface/transformers/pull/7410\r\n\r\nMetrics look good, let me know what you think of the new code! I will add you as PR coauthor!"
] | 1,600 | 1,601 | 1,601 | CONTRIBUTOR | null | Fixes #6808
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7356/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7356",
"html_url": "https://github.com/huggingface/transformers/pull/7356",
"diff_url": "https://github.com/huggingface/transformers/pull/7356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7356.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7355/comments | https://api.github.com/repos/huggingface/transformers/issues/7355/events | https://github.com/huggingface/transformers/pull/7355 | 707,757,483 | MDExOlB1bGxSZXF1ZXN0NDkyMTA2MjI4 | 7,355 | Add token_type_ids to prepare_inputs_for_generation for gpt/gpt2 | {
"login": "bhedayat",
"id": 13006899,
"node_id": "MDQ6VXNlcjEzMDA2ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/13006899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhedayat",
"html_url": "https://github.com/bhedayat",
"followers_url": "https://api.github.com/users/bhedayat/followers",
"following_url": "https://api.github.com/users/bhedayat/following{/other_user}",
"gists_url": "https://api.github.com/users/bhedayat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhedayat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhedayat/subscriptions",
"organizations_url": "https://api.github.com/users/bhedayat/orgs",
"repos_url": "https://api.github.com/users/bhedayat/repos",
"events_url": "https://api.github.com/users/bhedayat/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhedayat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe related to https://github.com/huggingface/transformers/pull/6601 and https://github.com/huggingface/transformers/pull/7552",
"Yes seems to be related to both. https://github.com/huggingface/transformers/pull/7355 doesn't seem to have token_type_ids passed in though, but if those PRs get merged in I'll close mine",
"We have the same problem here as explained in https://github.com/huggingface/transformers/pull/6601#issuecomment-708029212. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,600 | 1,619 | 1,619 | NONE | null | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7355/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7355",
"html_url": "https://github.com/huggingface/transformers/pull/7355",
"diff_url": "https://github.com/huggingface/transformers/pull/7355.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7355.patch",
"merged_at": null
} |
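To make the gap this PR targets concrete, here is a hypothetical sketch of a `prepare_inputs_for_generation`-style helper that keeps `token_type_ids` aligned with `input_ids` once a past cache is in use. It is an illustration only, not the merged code: the `past` key name and the trimming logic are assumptions based on the GPT-2 API of this era.

```python
import torch


def prepare_inputs_for_generation(input_ids, token_type_ids=None, past=None):
    # With a past cache, GPT-2-style models are fed only the last token; the
    # token_type_ids must be trimmed the same way or the shapes diverge.
    if past is not None:
        input_ids = input_ids[:, -1:]
        if token_type_ids is not None:
            token_type_ids = token_type_ids[:, -1:]
    return {"input_ids": input_ids, "past": past, "token_type_ids": token_type_ids}


# Smoke test: with a cache present, both tensors are trimmed to one position.
ids = torch.tensor([[5, 6, 7]])
tts = torch.tensor([[0, 0, 1]])
out = prepare_inputs_for_generation(ids, token_type_ids=tts, past=object())
assert out["input_ids"].shape == (1, 1) and out["token_type_ids"].shape == (1, 1)
```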
https://api.github.com/repos/huggingface/transformers/issues/7354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7354/comments | https://api.github.com/repos/huggingface/transformers/issues/7354/events | https://github.com/huggingface/transformers/issues/7354 | 707,725,658 | MDU6SXNzdWU3MDc3MjU2NTg= | 7,354 | Faster Pegasus tokenizer tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Any interest @stas00 ?",
"Yes, please, you can assign this to me, but most likely will be able to start on it in a few weeks when I have free time.",
"We can start this, but if we do we should wait for @thomwolf 's fast tokenizer PR to merge before we merge the fix.",
"This is unblocked, thom merged!"
] | 1,600 | 1,602 | 1,602 | CONTRIBUTOR | null | Current test_tokenization_pegasus.py takes more than a minute to run because it uses a full-size tokenizer [here](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_pegasus.py#L19)
It should use "fixtures/test_sentencepiece.model" like `tests/test_tokenization_t5.py` (a hedged sketch follows this record). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7354/timeline | completed | null | null |
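A minimal sketch of what the faster Pegasus test could look like, assuming `PegasusTokenizer` accepts a raw sentencepiece file the way `T5Tokenizer` does; the fixture path and the assertions are illustrative, not the test that was eventually merged.

```python
import os

from transformers import PegasusTokenizer

# Tiny sentencepiece fixture shipped with the test suite; loading it avoids the
# full-size Pegasus vocab download that makes the current test take over a minute.
SAMPLE_VOCAB = os.path.join(os.path.dirname(os.path.abspath(__file__)), "fixtures/test_sentencepiece.model")


def test_pegasus_tokenizer_from_fixture():
    tokenizer = PegasusTokenizer(SAMPLE_VOCAB)
    ids = tokenizer("This is a test").input_ids
    assert isinstance(ids, list) and len(ids) > 0
    # The round trip should recover the words, modulo special tokens.
    assert "test" in tokenizer.decode(ids, skip_special_tokens=True)
```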
https://api.github.com/repos/huggingface/transformers/issues/7353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7353/comments | https://api.github.com/repos/huggingface/transformers/issues/7353/events | https://github.com/huggingface/transformers/pull/7353 | 707,693,361 | MDExOlB1bGxSZXF1ZXN0NDkyMDUyMTQ4 | 7,353 | enable add_tokens for mbart tokenizer | {
"login": "znculee",
"id": 15342165,
"node_id": "MDQ6VXNlcjE1MzQyMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/15342165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/znculee",
"html_url": "https://github.com/znculee",
"followers_url": "https://api.github.com/users/znculee/followers",
"following_url": "https://api.github.com/users/znculee/following{/other_user}",
"gists_url": "https://api.github.com/users/znculee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/znculee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/znculee/subscriptions",
"organizations_url": "https://api.github.com/users/znculee/orgs",
"repos_url": "https://api.github.com/users/znculee/repos",
"events_url": "https://api.github.com/users/znculee/events{/privacy}",
"received_events_url": "https://api.github.com/users/znculee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,607 | 1,607 | NONE | null | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #7222
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7353/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7353",
"html_url": "https://github.com/huggingface/transformers/pull/7353",
"diff_url": "https://github.com/huggingface/transformers/pull/7353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7353.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7352/comments | https://api.github.com/repos/huggingface/transformers/issues/7352/events | https://github.com/huggingface/transformers/pull/7352 | 707,680,277 | MDExOlB1bGxSZXF1ZXN0NDkyMDQxMDI2 | 7,352 | Make PyTorch model files independent from each other | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=h1) Report\n> Merging [#7352](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129fdae04033fe4adfe013b734deaec6ec34ae2e?el=desc) will **increase** coverage by `1.25%`.\n> The diff coverage is `71.95%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7352 +/- ##\n==========================================\n+ Coverage 76.68% 77.93% +1.25% \n==========================================\n Files 181 181 \n Lines 34851 35140 +289 \n==========================================\n+ Hits 26724 27385 +661 \n+ Misses 8127 7755 -372 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.69% <22.72%> (-52.92%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <50.00%> (-2.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZXRyaWJlcnQucHk=) | `34.24% <50.00%> (ø)` | |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.38% <90.00%> (-0.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `84.98% <90.95%> (+2.79%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `84.16% <97.14%> (+0.69%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.36% <100.00%> (+0.08%)` | :arrow_up: |\n| [src/transformers/modeling\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.74% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `94.45% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `90.73% <100.00%> (-0.04%)` | :arrow_down: |\n| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=footer). Last update [129fdae...90a918f](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | COLLABORATOR | null | As discussed after the survey and expressed in the project README, our goal is to have independent model files even if it means some code is duplicated. This PR fixes this for all PyTorch models except:
- the full subclasses (CamemBERT, FlauBERT, XLM-RoBERTa),
- the BART-like models (BART, mBART, Marian, Pegasus),
- the "composite" models (BertGeneration, DPR and RetriBERT).
The first ones should stay as is, as we discussed internally, the second ones will be dealt with in another PR and I personally think the last ones (which directly import `BertModel`) should also stay as is.
This leverages the script introduced in #7219 to make sure the identical copies stay true to the original.
Also, as discussed with Lysandre, I removed the XxxLayerNorm aliases when they were just `nn.LayerNorm` (an illustration of the copy-checking convention follows this record). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7352/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7352",
"html_url": "https://github.com/huggingface/transformers/pull/7352",
"diff_url": "https://github.com/huggingface/transformers/pull/7352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7352.patch",
"merged_at": 1600952034000
} |
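The consistency script mentioned above works by diffing marked copies against their source. A hedged illustration of the idea; the marker syntax and the toy classes are assumptions, not the exact convention used in the repository.

```python
import torch.nn as nn


class BertOutputExample(nn.Module):
    def __init__(self, hidden_size: int = 8):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.LayerNorm = nn.LayerNorm(hidden_size)  # plain nn.LayerNorm, no XxxLayerNorm alias

    def forward(self, hidden_states):
        return self.LayerNorm(self.dense(hidden_states))


# Copied from BertOutputExample
# A checker can compare the body of any class carrying such a marker with its
# source and fail CI when the duplicated code drifts apart.
class AlbertOutputExample(nn.Module):
    def __init__(self, hidden_size: int = 8):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.LayerNorm = nn.LayerNorm(hidden_size)

    def forward(self, hidden_states):
        return self.LayerNorm(self.dense(hidden_states))
```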
https://api.github.com/repos/huggingface/transformers/issues/7351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7351/comments | https://api.github.com/repos/huggingface/transformers/issues/7351/events | https://github.com/huggingface/transformers/issues/7351 | 707,644,479 | MDU6SXNzdWU3MDc2NDQ0Nzk= | 7,351 | generic text classification with TensorFlow error (AttributeError: 'TFTrainingArguments' object has no attribute 'args') | {
"login": "c-col",
"id": 12224330,
"node_id": "MDQ6VXNlcjEyMjI0MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/12224330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c-col",
"html_url": "https://github.com/c-col",
"followers_url": "https://api.github.com/users/c-col/followers",
"following_url": "https://api.github.com/users/c-col/following{/other_user}",
"gists_url": "https://api.github.com/users/c-col/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c-col/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c-col/subscriptions",
"organizations_url": "https://api.github.com/users/c-col/orgs",
"repos_url": "https://api.github.com/users/c-col/repos",
"events_url": "https://api.github.com/users/c-col/events{/privacy}",
"received_events_url": "https://api.github.com/users/c-col/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\n\nThis is fixed in master.",
" @jplu Sorry, but I'm facing the same issue, and have version 3.2 installed. Can you please elaborate on how I might fix this? Thanks.",
"@sunnyville01 Just install the version on master with `pip install git+https://github.com/huggingface/transformers.git`",
"@jplu Thanks, that fixed it.",
"I am still facing this issue on colab with \r\n!pip install git+https://github.com/huggingface/transformers.git\r\n\r\n`---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-43-d201a6fb0a8d> in <module>()\r\n 17 learning_rate=LEARNING_RATE\r\n 18 )\r\n---> 19 with training_argsTF.strategy.scope():\r\n 20 modelTF = TFAutoModelForSequenceClassification.from_pretrained(\r\n 21 model_args['model_name'],\r\n\r\n4 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/training_args_tf.py in _setup_strategy(self)\r\n 120 logger.info(\"Tensorflow: setting up strategy\")\r\n 121 \r\n--> 122 if self.args.xla:\r\n 123 tf.config.optimizer.set_jit(True)\r\n 124 \r\n\r\nAttributeError: 'TFTrainingArguments' object has no attribute 'args'`",
"Something must be wrong with your install process, because this bug is fixed in master.",
"My bad, did not notice \"requirements already met message\", updated to \r\n!pip install --upgrade git+https://github.com/huggingface/transformers.git\r\n\r\nNo more issue! Sorry .",
"> Something must be wrong with your install process, because this bug is fixed in master.\r\n\r\nThe error seems to persist with me. I installed using `!pip install git+https://github.com/huggingface/transformers.git` and got the same error `TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'`\r\n\r\nHere's is a colab notebook, you can do runtime-> run all , and see the output of the last cell. \r\n\r\nhttps://colab.research.google.com/drive/1r3XCKYA8RBtfYmU2jqHVJT-uTt1ii04S?usp=sharing",
"@jplu I'm also getting the same error `TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'`, and I also ran the colab from @Santosh-Gupta and the error happened too. \r\nMy local environment is also based on transformer's master branch. ",
"@pvcastro Can you open a new issue please with all the details to be able for us to reproduce it. This thread is closed and about a different one."
] | 1,600 | 1,601 | 1,600 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Platform: Linux-4.15.0-1091-oem-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@jplu
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): bert-base-multilingual-uncased
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Running run_tf_text_classification.py with flags from the example in the "Run generic text classification script in TensorFlow" section of examples/text-classification
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Text classification dataset for classifying answers to questions. Using 3 CSVs (train, dev, and test) that each have headers (class, text) and columns containing class labels (int) and questions (strings). There are no commas present in the questions, for reference.
## To reproduce
Steps to reproduce the behavior:
1. Call run_tf_text_classification.py with flags from the example in the "Run generic text classification script in TensorFlow" section of examples/text-classification:
```python3
python run_tf_text_classification.py \
--train_file train.csv \
--dev_file dev.csv \
--test_file test.csv \
--label_column_id 0 \
--model_name_or_path bert-base-multilingual-uncased \
--output_dir model \
--num_train_epochs 4 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 32 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 10 \
--evaluate_during_training \
--save_steps 10 \
--overwrite_output_dir \
--max_seq_length 128
```
2. Error is encountered:
```python3
Traceback (most recent call last):
File "run_tf_text_classification.py", line 283, in <module>
main()
File "run_tf_text_classification.py", line 199, in main
training_args.n_replicas,
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/file_utils.py", line 936, in wrapper
return func(*args, **kwargs)
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/training_args_tf.py", line 180, in n_replicas
return self._setup_strategy.num_replicas_in_sync
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/file_utils.py", line 914, in __get__
cached = self.fget(obj)
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/file_utils.py", line 936, in wrapper
return func(*args, **kwargs)
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/training_args_tf.py", line 122, in _setup_strategy
if self.args.xla:
AttributeError: 'TFTrainingArguments' object has no attribute 'args'
```
3. If the logger.info call is commented out (lines 197-202), the above error is prevented but another error is encountered:
```python3
Traceback (most recent call last):
File "run_tf_text_classification.py", line 282, in <module>
main()
File "run_tf_text_classification.py", line 221, in main
max_seq_length=data_args.max_seq_length,
File "run_tf_text_classification.py", line 42, in get_tfds
ds = datasets.load_dataset("csv", data_files=files)
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/datasets/load.py", line 604, in load_dataset
**config_kwargs,
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/datasets/builder.py", line 158, in __init__
**config_kwargs,
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/datasets/builder.py", line 269, in _create_builder_config
for key in sorted(data_files.keys()):
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Here is a pip freeze:
```python3
absl-py==0.10.0
astunparse==1.6.3
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
dataclasses==0.7
datasets==1.0.2
dill==0.3.2
filelock==3.0.12
gast==0.3.3
google-auth==1.21.3
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.32.0
h5py==2.10.0
idna==2.10
importlib-metadata==2.0.0
joblib==0.16.0
Keras-Preprocessing==1.1.2
Markdown==3.2.2
numpy==1.18.5
oauthlib==3.1.0
opt-einsum==3.3.0
packaging==20.4
pandas==1.1.2
protobuf==3.13.0
pyarrow==1.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2020.1
regex==2020.7.14
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
sacremoses==0.0.43
scipy==1.4.1
sentencepiece==0.1.91
six==1.15.0
tensorboard==2.3.0
tensorboard-plugin-wit==1.7.0
tensorflow==2.3.0
tensorflow-estimator==2.3.0
termcolor==1.1.0
tokenizers==0.8.1rc2
tqdm==4.49.0
transformers==3.2.0
urllib3==1.25.10
Werkzeug==1.0.1
wrapt==1.12.1
xxhash==2.0.0
zipp==3.2.0
```
## Expected behavior
Model begins to train on the custom dataset. (A hedged workaround sketch for the second traceback follows this record.)
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7351/timeline | completed | null | null |
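Per the comments, the `TFTrainingArguments` attribute error was fixed on master. For the second traceback, the `'<' not supported between instances of 'NamedSplit'` error comes from `datasets` sorting the keys of `data_files`; a hedged workaround is to build that mapping with plain string keys instead of `datasets.Split` objects (file names here are the ones from the issue body).

```python
import datasets

# Plain string keys sort without issue; the datasets.Split objects used by the
# script's get_tfds helper at the time are what trigger the '<' TypeError.
data_files = {
    "train": "train.csv",
    "validation": "dev.csv",
    "test": "test.csv",
}
ds = datasets.load_dataset("csv", data_files=data_files)
print(ds["train"].column_names)  # expected: ['class', 'text'] given the headers above
```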
https://api.github.com/repos/huggingface/transformers/issues/7350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7350/comments | https://api.github.com/repos/huggingface/transformers/issues/7350/events | https://github.com/huggingface/transformers/pull/7350 | 707,643,936 | MDExOlB1bGxSZXF1ZXN0NDkyMDEwNjgz | 7,350 | Expand a bit the documentation doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=h1) Report\n> Merging [#7350](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129fdae04033fe4adfe013b734deaec6ec34ae2e?el=desc) will **increase** coverage by `1.40%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7350 +/- ##\n==========================================\n+ Coverage 76.68% 78.08% +1.40% \n==========================================\n Files 181 181 \n Lines 34851 34851 \n==========================================\n+ Hits 26724 27214 +490 \n+ Misses 8127 7637 -490 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/tokenization\\_phobert.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGhvYmVydC5weQ==) | `21.80% <0.00%> (-61.66%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `82.59% <0.00%> (-13.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: |\n| ... 
and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=footer). Last update [129fdae...f940cdf](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | COLLABORATOR | null | Add a few more instructions for people who do read the doc :-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7350/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7350",
"html_url": "https://github.com/huggingface/transformers/pull/7350",
"diff_url": "https://github.com/huggingface/transformers/pull/7350.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7350.patch",
"merged_at": 1600936459000
} |
https://api.github.com/repos/huggingface/transformers/issues/7349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7349/comments | https://api.github.com/repos/huggingface/transformers/issues/7349/events | https://github.com/huggingface/transformers/pull/7349 | 707,627,738 | MDExOlB1bGxSZXF1ZXN0NDkxOTk3MjE3 | 7,349 | Create README.md | {
"login": "abedkhooli",
"id": 11407254,
"node_id": "MDQ6VXNlcjExNDA3MjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11407254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abedkhooli",
"html_url": "https://github.com/abedkhooli",
"followers_url": "https://api.github.com/users/abedkhooli/followers",
"following_url": "https://api.github.com/users/abedkhooli/following{/other_user}",
"gists_url": "https://api.github.com/users/abedkhooli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abedkhooli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abedkhooli/subscriptions",
"organizations_url": "https://api.github.com/users/abedkhooli/orgs",
"repos_url": "https://api.github.com/users/abedkhooli/repos",
"events_url": "https://api.github.com/users/abedkhooli/events{/privacy}",
"received_events_url": "https://api.github.com/users/abedkhooli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,600 | 1,601 | 1,601 | CONTRIBUTOR | null | Model card for akhooli/personachat-arabic
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7349/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7349",
"html_url": "https://github.com/huggingface/transformers/pull/7349",
"diff_url": "https://github.com/huggingface/transformers/pull/7349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7349.patch",
"merged_at": 1601556532000
} |
https://api.github.com/repos/huggingface/transformers/issues/7348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7348/comments | https://api.github.com/repos/huggingface/transformers/issues/7348/events | https://github.com/huggingface/transformers/pull/7348 | 707,612,995 | MDExOlB1bGxSZXF1ZXN0NDkxOTg0ODM2 | 7,348 | Clean RAG docs and template docs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=h1) Report\n> Merging [#7348](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129fdae04033fe4adfe013b734deaec6ec34ae2e?el=desc) will **decrease** coverage by `0.92%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7348 +/- ##\n==========================================\n- Coverage 76.68% 75.75% -0.93% \n==========================================\n Files 181 181 \n Lines 34851 34853 +2 \n==========================================\n- Hits 26724 26402 -322 \n- Misses 8127 8451 +324 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `76.98% <ø> (ø)` | |\n| [src/transformers/retrieval\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9yZXRyaWV2YWxfcmFnLnB5) | `91.27% <ø> (ø)` | |\n| [src/transformers/tokenization\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmFnLnB5) | `71.11% <100.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=footer). Last update [129fdae...930dd4b](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | COLLABORATOR | null | Followup from #7345, this cleans up the documentation for RAG (since it was merged while I was working) and update the templates to the new docstrings. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7348/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7348",
"html_url": "https://github.com/huggingface/transformers/pull/7348",
"diff_url": "https://github.com/huggingface/transformers/pull/7348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7348.patch",
"merged_at": 1600953881000
} |
https://api.github.com/repos/huggingface/transformers/issues/7347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7347/comments | https://api.github.com/repos/huggingface/transformers/issues/7347/events | https://github.com/huggingface/transformers/issues/7347 | 707,558,740 | MDU6SXNzdWU3MDc1NTg3NDA= | 7,347 | [s2s] can distributed eval initiate model download on each rank | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"No it can't `from_pretrained` uses `FileLock`"
] | 1,600 | 1,602 | 1,602 | CONTRIBUTOR | null | + `from_pretrained` uses a FileLock to avoid this, but I wonder if there is a race condition.
+ Verify, then fix. Fix non-trivial because have to block other processes.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7347/timeline | completed | null | null |
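For reference, a sketch of the usual guard when such a race is suspected: rank 0 populates the cache while the other ranks wait at a barrier. This assumes `torch.distributed` was initialized by the launcher; the model class is only an example.

```python
import torch.distributed as dist

from transformers import AutoModelForSeq2SeqLM


def load_model_distributed(name: str):
    distributed = dist.is_available() and dist.is_initialized()
    if distributed and dist.get_rank() != 0:
        dist.barrier()  # wait until rank 0 has downloaded and cached the weights
    model = AutoModelForSeq2SeqLM.from_pretrained(name)
    if distributed and dist.get_rank() == 0:
        dist.barrier()  # release the waiting ranks; they now hit the local cache
    return model
```

Each rank calls `dist.barrier()` exactly once, so the calls pair up: rank 0's barrier releases everyone else's.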
https://api.github.com/repos/huggingface/transformers/issues/7346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7346/comments | https://api.github.com/repos/huggingface/transformers/issues/7346/events | https://github.com/huggingface/transformers/issues/7346 | 707,553,010 | MDU6SXNzdWU3MDc1NTMwMTA= | 7,346 | Difference between bart-large and bart-large-cnn vocabulary | {
"login": "swethmandava",
"id": 17828952,
"node_id": "MDQ6VXNlcjE3ODI4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/17828952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swethmandava",
"html_url": "https://github.com/swethmandava",
"followers_url": "https://api.github.com/users/swethmandava/followers",
"following_url": "https://api.github.com/users/swethmandava/following{/other_user}",
"gists_url": "https://api.github.com/users/swethmandava/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swethmandava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swethmandava/subscriptions",
"organizations_url": "https://api.github.com/users/swethmandava/orgs",
"repos_url": "https://api.github.com/users/swethmandava/repos",
"events_url": "https://api.github.com/users/swethmandava/events{/privacy}",
"received_events_url": "https://api.github.com/users/swethmandava/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The reason is the mask token, see https://github.com/huggingface/transformers/issues/3108.\r\nYou could try to use the resize_token_embeddings method, but even easier would be to pass the config changes you want to init\r\n```python\r\nBartForConditionalGeneration.from_pretrained('bart-large', num_beams=4, min_length=56, max_length=142, length_penalty=142, ...)\r\n```\r\n\r\n"
] | 1,600 | 1,603 | 1,603 | CONTRIBUTOR | null | Trying to finetune from pretrained bart checkpoint as follows:
```
config = BartConfig(**json.load(open(args.config_path, "r"))) #pointing to bart-large-cnn/config.json
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large', config=config) #use pretrained bart model's weights
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
```
but since facebook/bart-large and facebook/bart-large-cnn have different vocab sizes, it fails. What's the reason behind the different vocab sizes? How can I use the pretrained BART for finetuning - should I modify bart-large-cnn's config to use the same vocab size as bart-large? (A sketch of the approaches suggested in the comments follows this record.)
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7346/timeline | completed | null | null |
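A short sketch of the two approaches suggested in the comment above. The generation values are illustrative assumptions (the comment's `length_penalty=142` looks like a paste slip), not the official bart-large-cnn settings.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Option 1: keep bart-large's weights and vocab, overriding only the generation
# settings via kwargs that from_pretrained forwards to the config.
model = BartForConditionalGeneration.from_pretrained(
    "facebook/bart-large",
    num_beams=4,
    min_length=56,
    max_length=142,
    length_penalty=2.0,
)

# Option 2: if the bart-large-cnn tokenizer (different vocab size) must be used,
# resize the embedding matrix to match it.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model.resize_token_embeddings(len(tokenizer))
```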
https://api.github.com/repos/huggingface/transformers/issues/7345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7345/comments | https://api.github.com/repos/huggingface/transformers/issues/7345/events | https://github.com/huggingface/transformers/pull/7345 | 707,513,036 | MDExOlB1bGxSZXF1ZXN0NDkxOTAxODAx | 7,345 | Models doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=h1) Report\n> Merging [#7345](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `2.47%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7345 +/- ##\n==========================================\n+ Coverage 76.58% 79.05% +2.47% \n==========================================\n Files 181 181 \n Lines 34828 34828 \n==========================================\n+ Hits 26674 27535 +861 \n+ Misses 8154 7293 -861 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnRfZ2VuZXJhdGlvbi5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rwci5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Z1bm5lbC5weQ==) | `100.00% <ø> (ø)` | |\n| ... and [140 more](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=footer). Last update [28cf873...fb4ea94](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | COLLABORATOR | null | Do not review this PR unless you're masochistic or @LysandreJik.
This PR does a big clean-up of all models/tokenizers/config docstrings. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7345/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7345",
"html_url": "https://github.com/huggingface/transformers/pull/7345",
"diff_url": "https://github.com/huggingface/transformers/pull/7345.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7345.patch",
"merged_at": 1600881646000
} |
https://api.github.com/repos/huggingface/transformers/issues/7344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7344/comments | https://api.github.com/repos/huggingface/transformers/issues/7344/events | https://github.com/huggingface/transformers/pull/7344 | 707,510,707 | MDExOlB1bGxSZXF1ZXN0NDkxODk5ODQ2 | 7,344 | Remove reference to args in XLA check | {
"login": "ZeroCool2u",
"id": 3961523,
"node_id": "MDQ6VXNlcjM5NjE1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZeroCool2u",
"html_url": "https://github.com/ZeroCool2u",
"followers_url": "https://api.github.com/users/ZeroCool2u/followers",
"following_url": "https://api.github.com/users/ZeroCool2u/following{/other_user}",
"gists_url": "https://api.github.com/users/ZeroCool2u/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZeroCool2u/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZeroCool2u/subscriptions",
"organizations_url": "https://api.github.com/users/ZeroCool2u/orgs",
"repos_url": "https://api.github.com/users/ZeroCool2u/repos",
"events_url": "https://api.github.com/users/ZeroCool2u/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZeroCool2u/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=h1) Report\n> Merging [#7344](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `0.23%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7344 +/- ##\n==========================================\n+ Coverage 76.58% 76.82% +0.23% \n==========================================\n Files 181 181 \n Lines 34828 34828 \n==========================================\n+ Hits 26674 26757 +83 \n+ Misses 8154 8071 -83 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `42.64% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-51.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.17% <0.00%> (-15.39%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=footer). Last update [28cf873...82d7dee](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for the quick review @LysandreJik and all this excellent work! Didn't realize the Hugging Face team is based in NYC. If the offices ever actually open again and your team is interested, @mdvandergon and I would be stoked to host you for lunch at FRBNY. ",
"@jplu No worries, happens to the best of us! Thanks for all your hard work!",
"That's good to know, thanks for the offer @ZeroCool2u, @mdvandergon! "
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | Previously, the `TFTrainingArguments` object checked whether XLA was enabled by referencing `self.args.xla`, when it should be `self.xla`, because the object itself is the args instance. This can be verified a few lines above, where the `xla` field is set.
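A self-contained toy illustrating the failure mode (a hypothetical stand-in class, not the real `TFTrainingArguments`):

```python
class BuggyArgs:
    """Hypothetical stand-in for TFTrainingArguments, not the real class."""

    xla = False

    @property
    def strategy_buggy(self):
        # Mirrors the old code: looks up `args` on the object that *is* args.
        return self.args.xla

    @property
    def strategy_fixed(self):
        # Mirrors the fix: read the field on the object itself.
        return self.xla

args = BuggyArgs()
print(args.strategy_fixed)  # False
try:
    args.strategy_buggy
except AttributeError as err:
    print(err)  # 'BuggyArgs' object has no attribute 'args'
```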
Fixes #7343 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7344/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7344",
"html_url": "https://github.com/huggingface/transformers/pull/7344",
"diff_url": "https://github.com/huggingface/transformers/pull/7344.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7344.patch",
"merged_at": 1600883781000
} |
https://api.github.com/repos/huggingface/transformers/issues/7343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7343/comments | https://api.github.com/repos/huggingface/transformers/issues/7343/events | https://github.com/huggingface/transformers/issues/7343 | 707,510,167 | MDU6SXNzdWU3MDc1MTAxNjc= | 7,343 | AttributeError: 'TFTrainingArguments' object has no attribute 'args' | {
"login": "ZeroCool2u",
"id": 3961523,
"node_id": "MDQ6VXNlcjM5NjE1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZeroCool2u",
"html_url": "https://github.com/ZeroCool2u",
"followers_url": "https://api.github.com/users/ZeroCool2u/followers",
"following_url": "https://api.github.com/users/ZeroCool2u/following{/other_user}",
"gists_url": "https://api.github.com/users/ZeroCool2u/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZeroCool2u/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZeroCool2u/subscriptions",
"organizations_url": "https://api.github.com/users/ZeroCool2u/orgs",
"repos_url": "https://api.github.com/users/ZeroCool2u/repos",
"events_url": "https://api.github.com/users/ZeroCool2u/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZeroCool2u/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
Trainer: @sgugger
tensorflow: @jplu
## Information
Model I am using (Bert, XLNet ...): `distilbert-base-uncased`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
IMDB Sequence Classification
## To reproduce
Steps to reproduce the behavior:
1. Follow the [fine-tuning tutorial here and use TensorFlow](https://huggingface.co/transformers/master/custom_datasets.html#fine-tuning-with-trainer)
```python
AttributeError Traceback (most recent call last)
<ipython-input-10-c5306faf2c2f> in <module>()
12 )
13
---> 14 with training_args.strategy.scope():
15 model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
16
4 frames
/usr/local/lib/python3.6/dist-packages/transformers/training_args_tf.py in _setup_strategy(self)
120 logger.info("Tensorflow: setting up strategy")
121
--> 122 if self.args.xla:
123 tf.config.optimizer.set_jit(True)
124
AttributeError: 'TFTrainingArguments' object has no attribute 'args'
```
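Until the fix is released, a possible stopgap is to alias the object onto itself so the buggy lookup resolves. This is an untested hack sketch, not an official API:

```python
# Untested hack sketch: give the args object an `args` attribute pointing at
# itself, so _setup_strategy's `self.args.xla` lookup resolves to the real
# `xla` field instead of raising AttributeError.
from transformers import TFTrainingArguments, TFDistilBertForSequenceClassification

training_args = TFTrainingArguments(output_dir="./results")
training_args.args = training_args  # satisfy the buggy `self.args` lookup

with training_args.strategy.scope():
    model = TFDistilBertForSequenceClassification.from_pretrained(
        "distilbert-base-uncased"
    )
```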
## Expected behavior
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7343/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7342/comments | https://api.github.com/repos/huggingface/transformers/issues/7342/events | https://github.com/huggingface/transformers/issues/7342 | 707,481,867 | MDU6SXNzdWU3MDc0ODE4Njc= | 7,342 | CentOS Error installing Transformers | {
"login": "KimYar",
"id": 26006890,
"node_id": "MDQ6VXNlcjI2MDA2ODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/26006890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KimYar",
"html_url": "https://github.com/KimYar",
"followers_url": "https://api.github.com/users/KimYar/followers",
"following_url": "https://api.github.com/users/KimYar/following{/other_user}",
"gists_url": "https://api.github.com/users/KimYar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KimYar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KimYar/subscriptions",
"organizations_url": "https://api.github.com/users/KimYar/orgs",
"repos_url": "https://api.github.com/users/KimYar/repos",
"events_url": "https://api.github.com/users/KimYar/events{/privacy}",
"received_events_url": "https://api.github.com/users/KimYar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The error message indicated that you need to first install Rust compiler (https://www.rust-lang.org/tools/install).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version:
- Platform: CentOS
- Python version: 3.6.3
- PyTorch version (GPU?):1.6.0
- Tensorflow version (GPU?): tensorflow-gpu 2.3.0
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
@mfuntowicz
@jplu
## To reproduce
Steps to reproduce the behavior:
1. On a CentOS distribution, run "pip install transformers" with Python 3.6.3:
pip install transformers
Looking in links: /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/avx2, /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/generic, /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic
Ignoring pip: markers 'python_version < "3"' don't match your environment
Collecting transformers
Using cached transformers-3.2.0-py3-none-any.whl (1.0 MB)
Collecting tokenizers==0.8.1.rc2
Using cached tokenizers-0.8.1rc2.tar.gz (97 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/dataclasses-0.7-py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/filelock-3.0.12-py3-none-any.whl
Requirement already satisfied: tqdm>=4.27 in /home/-/ENV/lib/python3.6/site-packages (from transformers) (4.49.0)
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/packaging-20.4-py2.py3-none-any.whl
Requirement already satisfied: numpy in /home/-/ENV/lib/python3.6/site-packages (from transformers) (1.19.1)
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/requests-2.24.0-py2.py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/sacremoses-0.0.43-py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/generic/sentencepiece-0.1.90-cp36-cp36m-linux_x86_64.whl
Requirement already satisfied: regex!=2019.12.17 in /home/-/ENV/lib/python3.6/site-packages (from transformers) (2019.11.1)
Requirement already satisfied: six in /home/-/ENV/lib/python3.6/site-packages (from packaging->transformers) (1.15.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/-/ENV/lib/python3.6/site-packages (from packaging->transformers) (2.4.7)
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/certifi-2020.6.20-py2.py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/chardet-3.0.4-py2.py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/idna-2.10-py2.py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/urllib3-1.25.10-py2.py3-none-any.whl
Requirement already satisfied: joblib in /home/-/ENV/lib/python3.6/site-packages (from sacremoses->transformers) (0.16.0)
Requirement already satisfied: click in /home/-/ENV/lib/python3.6/site-packages (from sacremoses->transformers) (7.1.2)
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /home/--/ENV/bin/python /home/--/ENV/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmp_ut164h5
cwd: /tmp/pip-install-7krg2wb2/tokenizers
Complete output (38 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/tokenizers
copying tokenizers/__init__.py -> build/lib/tokenizers
creating build/lib/tokenizers/models
copying tokenizers/models/__init__.py -> build/lib/tokenizers/models
creating build/lib/tokenizers/decoders
copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders
creating build/lib/tokenizers/normalizers
copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers
creating build/lib/tokenizers/pre_tokenizers
copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers
creating build/lib/tokenizers/processors
copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors
creating build/lib/tokenizers/trainers
copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers
creating build/lib/tokenizers/implementations
copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/__init__.pyi -> build/lib/tokenizers
copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models
copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders
copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers
copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers
copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors
copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers
running build_ext
running build_rust
/tmp/pip-build-env-1p_8fw9e/overlay/lib/python3.6/site-packages/setuptools/dist.py:452: UserWarning: Normalizing '0.8.1.rc2' to '0.8.1rc2'
warnings.warn(tmpl.format(**locals()))
error: Can not find Rust compiler
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
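The `error: Can not find Rust compiler` line above is the root cause: `tokenizers` builds a native extension and needs a Rust toolchain on the build machine. A sketch of the usual fix using rustup's standard install flow (adjust for your environment and permissions):

```bash
# Install a Rust toolchain via rustup, put it on PATH, then retry the install.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"   # make cargo/rustc visible in the current shell
pip install transformers    # retry: tokenizers can now build its wheel
```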
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7342/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7341/comments | https://api.github.com/repos/huggingface/transformers/issues/7341/events | https://github.com/huggingface/transformers/issues/7341 | 707,420,180 | MDU6SXNzdWU3MDc0MjAxODA= | 7,341 | data_collator.py - line 326, in mask tokens - xlnet finetuning error | {
"login": "GenTxt",
"id": 22547261,
"node_id": "MDQ6VXNlcjIyNTQ3MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/22547261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GenTxt",
"html_url": "https://github.com/GenTxt",
"followers_url": "https://api.github.com/users/GenTxt/followers",
"following_url": "https://api.github.com/users/GenTxt/following{/other_user}",
"gists_url": "https://api.github.com/users/GenTxt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GenTxt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GenTxt/subscriptions",
"organizations_url": "https://api.github.com/users/GenTxt/orgs",
"repos_url": "https://api.github.com/users/GenTxt/repos",
"events_url": "https://api.github.com/users/GenTxt/events{/privacy}",
"received_events_url": "https://api.github.com/users/GenTxt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@sgugger might be interested in this issue as well.",
"I have the same issue. \r\nDoes anybody know any \"workarounds\" to bypass this issue? ",
"@GenTxt did you find any workaround for this error ?",
"No, unfortuantely. Was hoping others more familiar with the problem would\noffer solutions.\n\nOn Wed, Oct 7, 2020 at 8:46 AM Mihai Dobri <[email protected]> wrote:\n\n> @GenTxt <https://github.com/GenTxt> did you find any workaround for this\n> error ?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7341#issuecomment-704910726>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AFMAWPJSWOH7MWHGL52UXATSJRPJHANCNFSM4RXDWVGA>\n> .\n>\n",
"@LysandreJik or @sgugger I am wondering if you could please let us know if is a workaround for this issue ? or if a code fix is planned in the near future?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Solved by using TPU instructions on GPU:\r\n\r\n**Note:** On TPU , you should use the flag `--pad_to_max_length` in conjunction with the `--line_by_line` flag to make\r\nsure all your batches have the same length.\r\n\r\nWorks now.\r\n\r\nEncountered similar issue with fine-tuning Bert. \r\n\r\nSolved by using:\r\n\r\n --max_seq_length=512 with --line_by_line",
"How to solve the proble\r\n\r\nile \"/anaconda3/envs/pytorch-gpu/lib/python3.6/site-packages/transformers/data/data_collator.py\", line 615, in mask_tokens\r\n \"This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see relevant comments in source code for details.\"\r\nValueError: This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see relevant comments in source code for details.\r\n 0%| ",
"I ended up adding <pad> if token length is not even. Is this ok?",
"I need to hook in to this issue as I can't find a solution for this simple problem:\r\n\r\nI have a dataset that I \"stream\", meaning that during `__getitem__`, I read the line from file/memory, encode and return it. \r\n\r\nAs I am pre-training an `XLNetLMHeadModel`, I need the `DataCollatorForPermutationLanguageModeling `which throws the error \r\n\r\n \"This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see\"\r\n \" relevant comments in source code for details.\"\r\n\r\nas in in its function [torch_mask_tokens](https://huggingface.co/docs/transformers/v4.33.0/en/main_classes/data_collator#transformers.DataCollatorForLanguageModeling.torch_mask_tokens) no padding is happening. \r\n\r\nWhy is that? Can we somehow combine the `DataCollatorWithPadding ` with the `DataCollatorForPermutationLanguageModeling` in the [trainer](https://huggingface.co/docs/transformers/main_classes/trainer) class? Or is there any other clever solution for such a \"stream\"-like dataset?",
"Addendum: I think, a quick solution is to pad to `tokenizer.model_max_length` in the [_torch_collate_batch](https://github.com/huggingface/transformers/blob/v4.33.0/src/transformers/data/data_collator.py#L426) function.\r\n\r\nIt's not that _no padding_ is happening, but only to the max sequence length of the current batch which might happend to be uneven. As the `model_max_length `is usually an even number, this solves the problem."
] | 1,600 | 1,694 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-4.15.0-118-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
TransfoXL/XLNet: @TevenLeScao
## Information
Model I am using (XLNet):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Testing the simple example in 'language-modeling/examples/README' using the recommended wiki-2-raw dataset and the xlnet-base-cased model
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
The same error occurs with a simple one-sentence-per-line text file (10 MB)
## To reproduce
Steps to reproduce the behavior:
1. Run all steps in 'language-modeling/examples/README' using xlnet-base-cased (cached or local)
2. The model loads with warnings and the process begins, before quickly exiting with the following error:
File "/home/pixelhead/Desktop/xlnet/transformers-master/transformers/data/data_collator.py", line 326, in mask_tokens
"This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see relevant comments in source code for details."
ValueError: This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see relevant comments in source code for details.
Epoch: 0%| | 0/3 [00:00<?, ?it/s]
Iteration: 0%|
## Expected behavior
Expect 'run_language_modeling.py' to work for xlnet as per 'language-modeling/examples/README'
Have tested addition of '--line_by_line' and 'block_size=128, 256, 512' etc. Same error.
I could be missing something here ('Please see relevant comments in source code for details.'), but those comments are not clear to me.
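For what it's worth, the comments in this thread point at odd-length sequences as the trigger. A minimal sketch of the padding workaround described there (assumptions: `tokenizer` is the XLNet tokenizer and `examples` is a list of 1-D `LongTensor`s of token ids):

```python
import torch

def pad_to_even(ids: torch.Tensor, pad_id: int) -> torch.Tensor:
    # The collator needs even sequence lengths to build a leakage-free
    # perm_mask, so append a single pad token whenever the length is odd.
    if ids.size(0) % 2 == 0:
        return ids
    return torch.cat([ids, ids.new_full((1,), pad_id)])

examples = [pad_to_even(ids, tokenizer.pad_token_id) for ids in examples]
```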
Cheers, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7341/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7340/comments | https://api.github.com/repos/huggingface/transformers/issues/7340/events | https://github.com/huggingface/transformers/pull/7340 | 707,403,903 | MDExOlB1bGxSZXF1ZXN0NDkxODEyNjQz | 7,340 | Fixed evaluation_strategy on epoch end bug | {
"login": "WissamAntoun",
"id": 44616226,
"node_id": "MDQ6VXNlcjQ0NjE2MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/44616226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WissamAntoun",
"html_url": "https://github.com/WissamAntoun",
"followers_url": "https://api.github.com/users/WissamAntoun/followers",
"following_url": "https://api.github.com/users/WissamAntoun/following{/other_user}",
"gists_url": "https://api.github.com/users/WissamAntoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WissamAntoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WissamAntoun/subscriptions",
"organizations_url": "https://api.github.com/users/WissamAntoun/orgs",
"repos_url": "https://api.github.com/users/WissamAntoun/repos",
"events_url": "https://api.github.com/users/WissamAntoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/WissamAntoun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=h1) Report\n> Merging [#7340](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `2.24%`.\n> The diff coverage is `33.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7340 +/- ##\n==========================================\n+ Coverage 76.58% 78.83% +2.24% \n==========================================\n Files 181 181 \n Lines 34828 34828 \n==========================================\n+ Hits 26674 27456 +782 \n+ Misses 8154 7372 -782 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.62% <33.33%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.62% <0.00%> (-69.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.76% <0.00%> (+0.20%)` | :arrow_up: |\n| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=footer). Last update [28cf873...48c72a9](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot for the fix!"
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | Moved the evaluation call outside the per-batch iteration loop so it runs once at the end of each epoch.
Fixes #7339
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7340/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7340",
"html_url": "https://github.com/huggingface/transformers/pull/7340",
"diff_url": "https://github.com/huggingface/transformers/pull/7340.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7340.patch",
"merged_at": 1600881421000
} |
https://api.github.com/repos/huggingface/transformers/issues/7339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7339/comments | https://api.github.com/repos/huggingface/transformers/issues/7339/events | https://github.com/huggingface/transformers/issues/7339 | 707,368,925 | MDU6SXNzdWU3MDczNjg5MjU= | 7,339 | Trainer Evaluates at each step (Not of epoch end) , indentation bug | {
"login": "WissamAntoun",
"id": 44616226,
"node_id": "MDQ6VXNlcjQ0NjE2MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/44616226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WissamAntoun",
"html_url": "https://github.com/WissamAntoun",
"followers_url": "https://api.github.com/users/WissamAntoun/followers",
"following_url": "https://api.github.com/users/WissamAntoun/following{/other_user}",
"gists_url": "https://api.github.com/users/WissamAntoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WissamAntoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WissamAntoun/subscriptions",
"organizations_url": "https://api.github.com/users/WissamAntoun/orgs",
"repos_url": "https://api.github.com/users/WissamAntoun/repos",
"events_url": "https://api.github.com/users/WissamAntoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/WissamAntoun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: NA
### Who can help
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
- [ ] the official example scripts:
- [x] my own modified scripts:
The tasks I am working on is:
- [ ] an official GLUE/SQUaD task:
- [x] my own task or dataset:
Basic Single Sentence Classification Dataset loaded via a Dataset class
## To reproduce
Steps to reproduce the behavior:
1. Use the trainer training function with `training_args.evaluation_strategy = EvaluationStrategy.EPOCH`
## Expected behavior
Evaluation should happen after each epoch ends, but instead it happens after each step (batch).
Indentation bug
## Suggested Fix
Move the `if` condition on line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L829 to before line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L834 and remove one level of indentation
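A self-contained toy sketch of the corrected control flow (all names are hypothetical stand-ins, not the actual `trainer.py` code):

```python
from enum import Enum

class EvaluationStrategy(Enum):
    NO = "no"
    STEPS = "steps"
    EPOCH = "epoch"

def train(num_epochs=2, steps_per_epoch=3, strategy=EvaluationStrategy.EPOCH):
    for epoch in range(num_epochs):
        for step in range(steps_per_epoch):
            pass  # training_step(...), optimizer.step(), logging, ...
        # De-indented one level relative to the buggy version:
        # this now runs once per epoch instead of once per batch.
        if strategy == EvaluationStrategy.EPOCH:
            print(f"evaluate() after epoch {epoch}")

train()  # prints twice (once per epoch), not six times (once per step)
```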
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7339/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7338/comments | https://api.github.com/repos/huggingface/transformers/issues/7338/events | https://github.com/huggingface/transformers/issues/7338 | 707,331,604 | MDU6SXNzdWU3MDczMzE2MDQ= | 7,338 | BufferedWriter takes most of the time | {
"login": "entron",
"id": 3742499,
"node_id": "MDQ6VXNlcjM3NDI0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3742499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/entron",
"html_url": "https://github.com/entron",
"followers_url": "https://api.github.com/users/entron/followers",
"following_url": "https://api.github.com/users/entron/following{/other_user}",
"gists_url": "https://api.github.com/users/entron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/entron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/entron/subscriptions",
"organizations_url": "https://api.github.com/users/entron/orgs",
"repos_url": "https://api.github.com/users/entron/repos",
"events_url": "https://api.github.com/users/entron/events{/privacy}",
"received_events_url": "https://api.github.com/users/entron/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: macOS-10.14.6-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Speed and Memory Benchmarks: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import pipeline

english = pipeline(
"question-answering",
model="distilbert-base-uncased-distilled-squad",
tokenizer="distilbert-base-uncased-distilled-squad"
)
text1 = """It comes as pubs, bars, restaurants and other hospitality venues in England are told they must have a 22:00 closing time from Thursday.
Full details will be set out by the prime minister in Parliament later.
Boris Johnson is meeting the first ministers of Scotland, Wales and Northern Ireland and will address the nation in a live broadcast at 20:00 BST on Tuesday.
As well as the early closing time for hospitality venues, he is expected to announce they will be restricted by law to table service only.
"""
%%prun
english({'question': 'Which country is the news about?', 'context': text1})
```
The profiling result is:
```
6256 function calls (6155 primitive calls) in 1.097 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.713 0.713 0.713 0.713 {method 'write' of '_io.BufferedWriter' objects}
37 0.229 0.006 0.229 0.006 {method 'matmul' of 'torch._C._TensorBase' objects}
12 0.030 0.002 0.030 0.002 {built-in method matmul}
5 0.020 0.004 0.020 0.004 {method 'dump' of '_pickle.Pickler' objects}
6 0.019 0.003 0.019 0.003 {method 'softmax' of 'torch._C._TensorBase' objects}
33 0.012 0.000 0.012 0.000 {method 'acquire' of '_thread.lock' objects}
3 0.009 0.003 0.009 0.003 {built-in method posix.waitpid}
6 0.009 0.002 0.009 0.002 {method 'masked_fill_' of 'torch._C._TensorBase' objects}
6 0.009 0.001 0.009 0.001 {built-in method torch._C._nn.gelu}
37 0.006 0.000 0.235 0.006 functional.py:1655(linear)
94/1 0.005 0.000 0.325 0.325 module.py:710(_call_impl)
...
```
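The `posix.waitpid` and `Pickler.dump` rows suggest the `write` time may be the parent process feeding data to multiprocessing workers rather than model compute. A profiling sketch to narrow this down (assumes the `english` pipeline and `text1` defined above; the context-manager form needs Python 3.8+):

```python
import cProfile
import pstats

english({'question': 'warm-up', 'context': text1})  # absorb one-time setup

with cProfile.Profile() as profiler:
    english({'question': 'Which country is the news about?', 'context': text1})

pstats.Stats(profiler).sort_stats('cumulative').print_stats(15)
```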
## Expected behavior
Most of the time should be spent on inference, e.g. in methods such as `matmul`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7338/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7337/comments | https://api.github.com/repos/huggingface/transformers/issues/7337/events | https://github.com/huggingface/transformers/issues/7337 | 707,213,756 | MDU6SXNzdWU3MDcyMTM3NTY= | 7,337 | Trainer.py module 'datasets' has no attribute 'Dataset' | {
"login": "FrancoisMentec",
"id": 22057576,
"node_id": "MDQ6VXNlcjIyMDU3NTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/22057576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancoisMentec",
"html_url": "https://github.com/FrancoisMentec",
"followers_url": "https://api.github.com/users/FrancoisMentec/followers",
"following_url": "https://api.github.com/users/FrancoisMentec/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancoisMentec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancoisMentec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancoisMentec/subscriptions",
"organizations_url": "https://api.github.com/users/FrancoisMentec/orgs",
"repos_url": "https://api.github.com/users/FrancoisMentec/repos",
"events_url": "https://api.github.com/users/FrancoisMentec/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancoisMentec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"I'm not sure which `datasets` module is installed in your env, but the Hugging Face `datasets` definitely has a `Dataset` attribute. And no, this part is not using PyTorch `Dataset`.",
"Apparently, the `datasets` module wasn't even installed on my environment. But installing it just replaced the error by another one. It's upgrading PyTorch that fixed the issue, might be cool to be notified during the installation or the execution of transformers that we don't have the required PyTorch version for it to works. It feels awkward having to fixe dependencies myself, that's what a package manager like pip is usually used for. Maybe it's better handled by anaconda...",
"> Apparently, the `datasets` module wasn't even installed on my environment. But installing it just replaced the error by another one. It's upgrading PyTorch that fixed the issue, might be cool to be notified during the installation or the execution of transformers that we don't have the required PyTorch version for it to works. It feels awkward having to fixe dependencies myself, that's what a package manager like pip is usually used for. Maybe it's better handled by anaconda...\r\n\r\nhi, could you please tell me which pytorch version you have been upgraded to solve this problem? I got the same problem.",
"@ericdoug-qi Can't remember, did you check you are using the latest version of PyTorch and Transformers? Otherwise, try anaconda.",
"may be you use the wrong moudule, try to run conmand \"pip uninstall datasets\" , then it can use Hugging Face datasets"
] | 1,600 | 1,688 | 1,600 | NONE | null | I'm trying to use a Trainer, but I get this error:
```
c:\users\francois\appdata\local\programs\python\python37\lib\site-packages\transformers\trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, tb_writer, optimizers, **kwargs)
287
288 if is_datasets_available():
--> 289 if isinstance(train_dataset, datasets.Dataset):
290 self._remove_unused_columns(self.train_dataset, description="training")
291 if isinstance(eval_dataset, datasets.Dataset):
AttributeError: module 'datasets' has no attribute 'Dataset'
```
My guess is that `datasets.Dataset` should be replaced by `torch.utils.data.Dataset`, but I haven't checked the source file. Maybe the person responsible for `Trainer` development should look into that.
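A quick sanity check along the lines of the comments above, to confirm which `datasets` is actually being imported:

```python
# Sanity-check sketch: the Trainer expects the Hugging Face `datasets`
# package, which exposes `Dataset`; an unrelated module of the same name
# on sys.path would shadow it and trigger this AttributeError.
import datasets

print(datasets.__file__)             # where the module was imported from
print(hasattr(datasets, "Dataset"))  # True for the Hugging Face package
```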
I'm using transformers version 3.2.0 btw. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7337/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7336/comments | https://api.github.com/repos/huggingface/transformers/issues/7336/events | https://github.com/huggingface/transformers/issues/7336 | 707,150,342 | MDU6SXNzdWU3MDcxNTAzNDI= | 7,336 | Error when fine-tune RoBERTa on NSP using Trainer | {
"login": "adamwawrzynski",
"id": 19324675,
"node_id": "MDQ6VXNlcjE5MzI0Njc1",
"avatar_url": "https://avatars.githubusercontent.com/u/19324675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adamwawrzynski",
"html_url": "https://github.com/adamwawrzynski",
"followers_url": "https://api.github.com/users/adamwawrzynski/followers",
"following_url": "https://api.github.com/users/adamwawrzynski/following{/other_user}",
"gists_url": "https://api.github.com/users/adamwawrzynski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adamwawrzynski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamwawrzynski/subscriptions",
"organizations_url": "https://api.github.com/users/adamwawrzynski/orgs",
"repos_url": "https://api.github.com/users/adamwawrzynski/repos",
"events_url": "https://api.github.com/users/adamwawrzynski/events{/privacy}",
"received_events_url": "https://api.github.com/users/adamwawrzynski/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @adamwawrzynski,\r\n\r\nCould you create a google-colab where we can reproduce the error? It is quite difficult to reproduce your error since it seems to be very specific to your usecase.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @adamwawrzynski, @patrickvonplaten,\r\n\r\nThe issue is caused by `random_start = random.randint(0, len(random_document) - 1)` having a zero length `random_document`.\r\n\r\nJust run the following and you will get the same error:\r\n```\r\nimport random\r\nrandom.randint(0, 0 - 1)\r\n```\r\n\r\nThe zero length `random_document` can occur if for example the data file's last line is an empty line.\r\n\r\nYou can solve for this either by ensuring that there is no empty line at the end of the data file and/or by monkey patching the `TextDatasetForNextSentencePrediction.__init__()` method (https://github.com/huggingface/transformers/blob/master/src/transformers/data/datasets/language_modeling.py#L353), by adding a line like this:\r\n```\r\nself.documents = [d for d in self.documents if len(d)]\r\n```\r\n\r\nNOTE: Obviously because this failure pops up when random documents are picked, this error will NOT come up at every run due to this randomness, so if you want to reproduce, you might need to run your example multiple times.\r\n"
] | 1,600 | 1,643 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-5.4.0-45-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: IDK.
- Using distributed or parallel set-up in script?: IDK.
### Who can help
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
nlp datasets: [different repo](https://github.com/huggingface/nlp)
## Information
Model I am using: RoBERTa trained for the Polish language: [polish-roberta](https://github.com/sdadas/polish-roberta), version [roberta_base_transformers](https://github.com/sdadas/polish-roberta/releases/download/models/roberta_base_transformers.zip).
The problem arises when using:
* [ ] my own modified scripts:
```python
from transformers import (BertForNextSentencePrediction,
BertTokenizer,
RobertaModel, RobertaTokenizer, Trainer,
TrainingArguments)
from transformers.data.datasets.language_modeling import TextDatasetForNextSentencePrediction
from transformers.data.data_collator import DataCollatorForNextSentencePrediction
from argparse import ArgumentParser
def parse_args():
parser = ArgumentParser("Fine-tune RoBERTa in Next Sentence Prediction.")
parser.add_argument("-m", "--model_path", dest="model_path", required=True, help="Path to RoBERTa model.")
parser.add_argument("-o", "--output_path", dest="output_path", required=True, help="Path to directory of fine-tuned model.")
parser.add_argument("-d", "--dataset_path", dest="dataset_path", required=True, help="Path to dataset.")
args = parser.parse_args()
return args
if __name__ == "__main__":
args = parse_args()
tokenizer = RobertaTokenizer.from_pretrained(args.model_path)
finetune_model = BertForNextSentencePrediction.from_pretrained(args.model_path)
training_args = TrainingArguments(
output_dir=args.output_path,
num_train_epochs=3,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
)
data_collator = DataCollatorForNextSentencePrediction(
tokenizer=tokenizer,
mlm=False,
block_size=512,
nsp_probability=0.5,
)
train_dataset = TextDatasetForNextSentencePrediction(
tokenizer=tokenizer,
file_path=args.dataset_path,
block_size=512,
)
trainer = Trainer(
model=finetune_model,
args=training_args,
train_dataset=train_dataset,
data_collator=data_collator,
)
trainer.train()
trainer.save_model(args.output_path)
```
The tasks I am working on is:
* [ ] my own task or dataset based on TextDatasetForNextSentencePrediction input format:
```bash
<doc1_turn1>
<doc1_turn2>
<doc2_turn1>
<doc2_turn2>
...
```
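Per the diagnosis in the comments above, the crash comes from `random.randint(0, len(random_document) - 1)` being called on an empty document, for example when the data file ends with a blank line. A minimal pre-processing sketch of that workaround; the file names are placeholders, and it assumes the blank-line-separated document layout this dataset class splits on:

```python
# Workaround sketch: drop empty documents so the random document picked for a
# negative NSP pair can never have length zero. Paths are placeholders.
with open("dataset.txt", encoding="utf-8") as src:
    documents = [d.strip() for d in src.read().split("\n\n")]

documents = [d for d in documents if d]  # remove empty documents

with open("dataset_clean.txt", "w", encoding="utf-8") as dst:
    dst.write("\n\n".join(documents) + "\n")
```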
## To reproduce
Steps to reproduce the behavior:
1. `python finetune_roberta.py -m <model_dir> -o output/ -d <dataset_path>`
```bash
Special tokens have been added in the vocabulary, make sure the associated word emebedding are fine-tuned or trained.
Some weights of the model checkpoint at roberta_base/ were not used when initializing BertForNextSentencePrediction: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.encoder.layer.1.attention.self.key.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.1.output.dense.weight', 'roberta.encoder.layer.1.output.dense.bias', 'roberta.encoder.layer.1.output.LayerNorm.weight', 'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.key.bias', 'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.2.output.dense.weight', 'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.3.intermediate.dense.bias', 
'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.4.attention.self.key.bias', 'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.5.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.6.attention.self.value.weight', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.6.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.self.query.weight', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.7.attention.self.value.bias', 'roberta.encoder.layer.7.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 
'roberta.encoder.layer.7.output.LayerNorm.bias', 'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.8.output.dense.bias', 'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.9.attention.self.value.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.encoder.layer.10.attention.self.query.bias', 'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.10.output.dense.bias', 'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.11.attention.self.query.bias', 'roberta.encoder.layer.11.attention.self.key.weight', 'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.11.attention.self.value.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias', 'lm_head.bias', 
'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
- This IS expected if you are initializing BertForNextSentencePrediction from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertForNextSentencePrediction from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForNextSentencePrediction were not initialized from the model checkpoint at roberta_base/ and are newly initialized: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.output.dense.weight', 
'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 
'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch: 0%| | 0/3 [00:00<?, ?it/s/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
{'loss': 0.676176025390625, 'learning_rate': 5e-05, 'epoch': 0.3427004797806717, 'step': 500} | 499/1459 [04:30<08:09, 1.96it/s]
/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.
warnings.warn(SAVE_STATE_WARNING, UserWarning)
{'loss': 0.671025390625, 'learning_rate': 4.355171524374517e-05, 'epoch': 0.6854009595613434, 'step': 1000}███████████▎ | 999/1459 [08:47<03:53, 1.97it/s]
Traceback (most recent call last):███████████████████████████████████████████████████████████████████████████████████████ | 1033/1459 [09:06<03:38, 1.95it/s]
File "finetune_roberta.py", line 75, in <module>
trainer.train()
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/trainer.py", line 699, in train
for step, inputs in enumerate(epoch_iterator):
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/data/data_collator.py", line 358, in __call__
input_id, segment_id, attention_mask, label = self.create_examples_from_document(doc, i, examples)
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/data/data_collator.py", line 446, in create_examples_from_document
random_start = random.randint(0, len(random_document) - 1)
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/random.py", line 248, in randint
return self.randrange(a, b+1)
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/random.py", line 226, in randrange
raise ValueError("empty range for randrange() (%d, %d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (0, 0, 0)
Epoch: 0%| | 0/3 [09:09<?, ?it/s]
Iteration: 71%|████████████████████████████████████████████████████████████████████████████████████████████████████████ | 1033/1459 [09:09<03:46, 1.88it/s]
```
## Expected behavior
The model is fine-tuned on the NSP task on the given dataset and saved afterwards.
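Update: following the suggestion in the comments, here is a sketch of the cleanup I would apply so the file never ends with a blank line (`<dataset_path>` stands for my actual data file):
```python
# strip trailing blank lines so TextDatasetForNextSentencePrediction never
# builds a zero-length document, which is what crashes random.randint()
with open("<dataset_path>") as f:
    text = f.read()
with open("<dataset_path>", "w") as f:
    f.write(text.rstrip("\n") + "\n")
```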
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7335/comments | https://api.github.com/repos/huggingface/transformers/issues/7335/events | https://github.com/huggingface/transformers/issues/7335 | 706,974,441 | MDU6SXNzdWU3MDY5NzQ0NDE= | 7,335 | is there a tokenizer only used whitespace for spliting chinese sentence? | {
"login": "lw00245",
"id": 24726347,
"node_id": "MDQ6VXNlcjI0NzI2MzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/24726347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lw00245",
"html_url": "https://github.com/lw00245",
"followers_url": "https://api.github.com/users/lw00245/followers",
"following_url": "https://api.github.com/users/lw00245/following{/other_user}",
"gists_url": "https://api.github.com/users/lw00245/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lw00245/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lw00245/subscriptions",
"organizations_url": "https://api.github.com/users/lw00245/orgs",
"repos_url": "https://api.github.com/users/lw00245/repos",
"events_url": "https://api.github.com/users/lw00245/events{/privacy}",
"received_events_url": "https://api.github.com/users/lw00245/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,606 | 1,606 | NONE | null | I want to use a BERT masked language model to pre-train on Chinese sentences. I have already split the Chinese sentences into meaningful words; the data file looks as follows:
我 是 一个 队员
他 不是 一个 合格 的 老师
......
I only want to split them on whitespace, but BertWordPieceTokenizer splits them to character level, so the final vocabulary looks as follows:
{'[SEP]': 3,
'一': 7,
'是': 15,
'员': 12,
'[CLS]': 2,
'[UNK]': 1,
'</S>': 6,
'[MASK]': 4,
'<S>': 5,
'队': 19,
'的': 17,
'不': 8,
'我': 14,
'他': 10,
'老': 18,
'[PAD]': 0,
'格': 16,
'个': 9,
'师': 13,
'合': 11}
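What I am after is something like the following sketch with the standalone `tokenizers` library (the class names `WordLevel`, `WordLevelTrainer` and `WhitespaceSplit` are my guess at the right pieces, `zh_corpus.txt` is a placeholder for the pre-segmented file, and this needs a recent `tokenizers` release):
```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import WhitespaceSplit
from tokenizers.trainers import WordLevelTrainer

# word-level model, so tokens are never broken into sub-word pieces
tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = WhitespaceSplit()  # split on whitespace only
trainer = WordLevelTrainer(special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"])
tokenizer.train(files=["zh_corpus.txt"], trainer=trainer)
print(tokenizer.encode("我 是 一个 队员").tokens)  # whole words, not characters
```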
How can I correct this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7335/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7335/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7334/comments | https://api.github.com/repos/huggingface/transformers/issues/7334/events | https://github.com/huggingface/transformers/pull/7334 | 706,824,218 | MDExOlB1bGxSZXF1ZXN0NDkxMzA5NDk1 | 7,334 | [testing] skip decorators: docs, tests, bugs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=h1) Report\n> Merging [#7334](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/25b0463d0ba3fcbcf7fff8aa4027a2d8e959364b?el=desc) will **decrease** coverage by `3.75%`.\n> The diff coverage is `43.75%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7334 +/- ##\n==========================================\n- Coverage 80.48% 76.73% -3.76% \n==========================================\n Files 181 181 \n Lines 34827 34827 \n==========================================\n- Hits 28032 26724 -1308 \n- Misses 6795 8103 +1308 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `67.28% <43.75%> (-1.24%)` | :arrow_down: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `19.02% <0.00%> (-74.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=footer). Last update [25b0463...c17c310](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | This PR:
* fixes a bug in `require_torch_and_cuda`
* makes all skip decorators consistent code-wise
* adds a test for testing combinations of skip decorators and other decorators
* clarifies `testing.rst` notes
OK, so other than a small bug in `require_torch_and_cuda`, our skip decorators can be used in any order.
The only problem I found so far is when they are used together with `@parameterized`, which has to come first, with the skip decorators last. It rewrites test names to create a unique test name for each parameter group and then runs them; it has no idea whether any skip decorators sit before it (the decorators all get stacked, and the one below has no idea what the one above does).
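For instance, this ordering works (a sketch; the test itself is made up):
```python
import math
import unittest

from parameterized import parameterized
from transformers.testing_utils import require_torch


class TestFloor(unittest.TestCase):
    # @parameterized.expand must be listed first and skip decorators last,
    # since expand rewrites the test name per parameter group
    @parameterized.expand([("pos", 1.5, 1), ("neg", -1.5, -2)])
    @require_torch
    def test_floor(self, name, value, expected):
        self.assertEqual(math.floor(value), expected)
```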
If you find other unusual decorators, please let me know and I will investigate.
Partially fixes #7326
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7334/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7334",
"html_url": "https://github.com/huggingface/transformers/pull/7334",
"diff_url": "https://github.com/huggingface/transformers/pull/7334.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7334.patch",
"merged_at": 1600852579000
} |
https://api.github.com/repos/huggingface/transformers/issues/7333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7333/comments | https://api.github.com/repos/huggingface/transformers/issues/7333/events | https://github.com/huggingface/transformers/issues/7333 | 706,764,619 | MDU6SXNzdWU3MDY3NjQ2MTk= | 7,333 | Cannot import transformers with TF version 2.1.0 | {
"login": "amogkam",
"id": 8068268,
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amogkam",
"html_url": "https://github.com/amogkam",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"repos_url": "https://api.github.com/users/amogkam/repos",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello !\r\n\r\nIndeed, the requirements have to be updated.",
"Has the problem been solved? I met the same issue when loading the transformers.",
"Hello, you have to have TF 2.3 at min. This will be fixed in the next release.",
"This breaks at least a couple of the tutorial notebooks. Even with TF 2.3.0 I get the same error.",
"If you get this message error:\r\n```\r\nAttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'\r\n```\r\nIt means you don't have at least TF 2.2 installed.",
"The problem can be seen as Transformers uses the tf _swish activation function_ by default (that does not exists in tf 2.1: https://www.tensorflow.org/versions/r2.1/api_docs/python/tf/keras/activations).\r\n\r\nA workaround, instead of upgrading tf to 2.2 (unavailable at this time with `conda`), is to downgrade Transformers to a version that was developed with tf 2.1. \r\n\r\nFor example, I had this warning with TF 2.1.0 and transformers 3.5.1 and it desappeared with transformers 3.0.2 [Warning : this version installs a specific version of pytorch so it is better to use a virtual environment].",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,600 | 1,619 | 1,619 | COLLABORATOR | null | The installation README says that the transformers library requires TensorFlow > 2.0, but I can't seem to import the latest transformers 3.2 release even with TensorFlow 2.1.
```
>>> import transformers
wandb: WARNING W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/__init__.py", line 121, in <module>
from .pipelines import (
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/pipelines.py", line 47, in <module>
from .modeling_tf_auto import (
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/modeling_tf_auto.py", line 45, in <module>
from .modeling_tf_albert import (
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/modeling_tf_albert.py", line 24, in <module>
from .activations_tf import get_tf_activation
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/activations_tf.py", line 53, in <module>
"swish": tf.keras.activations.swish,
AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'
```
Upgrading to TF 2.2 works fine, but I think this should be made more clear in the docs.
cc @jplu @sgugger
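In the meantime, a guard like this at the top of a script would make the requirement explicit (a sketch; the 2.2 bound is my reading of the error above):
```python
import tensorflow as tf
from packaging import version

# tf.keras.activations.swish only appeared in TF 2.2, which is what the
# AttributeError on import boils down to
if version.parse(tf.__version__) < version.parse("2.2.0"):
    raise RuntimeError(f"transformers 3.2 needs TensorFlow >= 2.2, found {tf.__version__}")
```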
## Environment info
- `transformers` version: 3.2.0
- Platform: Mac OS
- Python version: 3.7.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.1.0. On CPU only.
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7333/reactions",
"total_count": 16,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7333/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7332/comments | https://api.github.com/repos/huggingface/transformers/issues/7332/events | https://github.com/huggingface/transformers/issues/7332 | 706,742,316 | MDU6SXNzdWU3MDY3NDIzMTY= | 7,332 | data_collator error: AttributeError: 'dict' object has no attribute 'size' | {
"login": "gungor2",
"id": 22436319,
"node_id": "MDQ6VXNlcjIyNDM2MzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22436319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gungor2",
"html_url": "https://github.com/gungor2",
"followers_url": "https://api.github.com/users/gungor2/followers",
"following_url": "https://api.github.com/users/gungor2/following{/other_user}",
"gists_url": "https://api.github.com/users/gungor2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gungor2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gungor2/subscriptions",
"organizations_url": "https://api.github.com/users/gungor2/orgs",
"repos_url": "https://api.github.com/users/gungor2/repos",
"events_url": "https://api.github.com/users/gungor2/events{/privacy}",
"received_events_url": "https://api.github.com/users/gungor2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It looks like you're not using the latest version of transformers (from the stack trace). This bug as been fixed, so you shouldn't have the problems with transformers 3.1.1.\r\nIn general, when reporting a bug/asking a question, make sure you include your version of transformers so we can help more efficiently. You can get it by running the command `transformers-cli env` and pasting the results.",
"I will add the version from now on. Your suggestion worked, thanks a lot!",
"i use the transformer 3.4.0, and met the same error",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,610 | 1,610 | NONE | null | # ❓ Questions & Help
## Details
**A link to original question on the forum/Stack Overflow**:
I am trying to run a language model that is very similar to the [tutorial](https://huggingface.co/blog/how-to-train). I have a custom dataset class that returns a dict with fields: dict_keys(['input_ids', 'token_type_ids', 'attention_mask']). When I run the training, I get this error message:
```
File "prod2vec/train-from-scratch.py", line 289, in <module>
sys.exit(main())
File "prod2vec/train-from-scratch.py", line 265, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/site-packages/transformers/trainer.py", line 456, in train
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.6/site-packages/tqdm/std.py", line 1127, in __iter__
for obj in iterable:
File "/usr/local/lib64/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/usr/local/lib64/python3.6/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib64/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 35, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.6/site-packages/transformers/data/data_collator.py", line 79, in __call__
batch = self._tensorize_batch(examples)
File "/usr/local/lib/python3.6/site-packages/transformers/data/data_collator.py", line 91, in _tensorize_batch
length_of_first = examples[0].size(0)
AttributeError: 'dict' object has no attribute 'size'
```
The error message is not surprising, as examples[0] is a dictionary with the three fields mentioned previously. I am curious what mistake I am making, and where.
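For reference, here is a minimal sketch of a `__getitem__` that matches what `_tensorize_batch` expects in this version, returning a bare tensor instead of a dict (`self.encodings` stands for my tokenizer output):
```python
import torch

class LineDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        self.encodings = encodings  # output of tokenizer(lines, truncation=True, ...)

    def __len__(self):
        return len(self.encodings["input_ids"])

    def __getitem__(self, idx):
        # the 3.0.2 language-modeling collator expects one tensor of
        # input_ids per example, not a dict of features
        return torch.tensor(self.encodings["input_ids"][idx])
```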
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7332/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7331/comments | https://api.github.com/repos/huggingface/transformers/issues/7331/events | https://github.com/huggingface/transformers/pull/7331 | 706,727,360 | MDExOlB1bGxSZXF1ZXN0NDkxMjIzNDIw | 7,331 | [s2s] only save metrics.json from rank zero | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null |
Fixes #{issue number}
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7331/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7331",
"html_url": "https://github.com/huggingface/transformers/pull/7331",
"diff_url": "https://github.com/huggingface/transformers/pull/7331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7331.patch",
"merged_at": 1600813649000
} |
https://api.github.com/repos/huggingface/transformers/issues/7330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7330/comments | https://api.github.com/repos/huggingface/transformers/issues/7330/events | https://github.com/huggingface/transformers/pull/7330 | 706,721,191 | MDExOlB1bGxSZXF1ZXN0NDkxMjE4NDQz | 7,330 | Ensure that integrations are imported before transformers or ml libs | {
"login": "dsblank",
"id": 168568,
"node_id": "MDQ6VXNlcjE2ODU2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/168568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsblank",
"html_url": "https://github.com/dsblank",
"followers_url": "https://api.github.com/users/dsblank/followers",
"following_url": "https://api.github.com/users/dsblank/following{/other_user}",
"gists_url": "https://api.github.com/users/dsblank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsblank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsblank/subscriptions",
"organizations_url": "https://api.github.com/users/dsblank/orgs",
"repos_url": "https://api.github.com/users/dsblank/repos",
"events_url": "https://api.github.com/users/dsblank/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsblank/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=h1) Report\n> Merging [#7330](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f5518e56318a79056ba3c80cbece29d9fe98558c?el=desc) will **decrease** coverage by `0.47%`.\n> The diff coverage is `88.88%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7330 +/- ##\n==========================================\n- Coverage 79.30% 78.83% -0.48% \n==========================================\n Files 181 181 \n Lines 34828 34828 \n==========================================\n- Hits 27620 27456 -164 \n- Misses 7208 7372 +164 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/integrations.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9pbnRlZ3JhdGlvbnMucHk=) | `29.00% <87.50%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.38% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.62% <0.00%> (-69.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=footer). Last update [f5518e5...0554448](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | This PR fixes a problem with some 3rd-party integrations that need to be imported before any transformers or other machine learning framework Python modules.
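As an illustration of the constraint (a sketch, not code from this PR):
```python
# comet_ml patches the ML frameworks at import time, so it has to come
# before torch/tensorflow/transformers to capture anything
import comet_ml  # noqa: F401

import transformers  # safe now: the integration hooks are already in place
```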
This PR makes the following changes:
1. Moves `import .integrations` in `__init__.py` before any other transformers imports
2. Moves ML imports in .integrations below 3rd-party imports
3. Used math.ceil() rather than numpy.ceil() as that was overkill
Before PR:
* failed with comet_ml
After PR:
* works with comet_ml | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7330/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7330",
"html_url": "https://github.com/huggingface/transformers/pull/7330",
"diff_url": "https://github.com/huggingface/transformers/pull/7330.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7330.patch",
"merged_at": 1600881826000
} |
https://api.github.com/repos/huggingface/transformers/issues/7329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7329/comments | https://api.github.com/repos/huggingface/transformers/issues/7329/events | https://github.com/huggingface/transformers/issues/7329 | 706,700,603 | MDU6SXNzdWU3MDY3MDA2MDM= | 7,329 | Problem loading a dynamic quantized distilbert model. | {
"login": "MarwenJ12",
"id": 71731530,
"node_id": "MDQ6VXNlcjcxNzMxNTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/71731530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarwenJ12",
"html_url": "https://github.com/MarwenJ12",
"followers_url": "https://api.github.com/users/MarwenJ12/followers",
"following_url": "https://api.github.com/users/MarwenJ12/following{/other_user}",
"gists_url": "https://api.github.com/users/MarwenJ12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarwenJ12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarwenJ12/subscriptions",
"organizations_url": "https://api.github.com/users/MarwenJ12/orgs",
"repos_url": "https://api.github.com/users/MarwenJ12/repos",
"events_url": "https://api.github.com/users/MarwenJ12/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarwenJ12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I got the same issue here, it would be great to know why",
"You are trying to load into a not-quantized module (ModelForTokenClassification) some quantized weights (`quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)`)\r\nYou should make sure first that the instance you are loading into is actually a quantized model.",
"Thanks for your response. So if I understood correctly, I have to write the code to load the quantized model ? something similar to DistilBertForTokenClassification ?",
"any updates on thhis?",
"It is a matter of adding a few lines:\r\n\r\n```python\r\n# Transform your model into a quantized model\r\nquantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)\r\n# Load the quantized weights into the quantized model (module in torch)\r\nquantized_model.load_state_dict(torch.load(YOUR_PATH_TO_THE_QUANTIZED_WEIGHTS))\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,611 | 1,611 | NONE | null | Hello and thanks for your awesome library,
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-117-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@VictorSanh
@stefan-it
## Information
I'm trying to optimize a DistilBERT model fine-tuned for token classification (NER) through dynamic quantization.
I use this line:
```python
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
```
The model size goes from 540 MB to 411 MB.
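(For anyone reproducing the size comparison, here is one rough way to measure it, assuming `model` is the fine-tuned float model and `quantized_model` the result of the snippet above; the temp file name is arbitrary:)
```python
import os
import torch

def state_dict_size_mb(m, tmp_path="tmp_weights.pt"):
    # Serialize the state dict and read back the file size on disk.
    torch.save(m.state_dict(), tmp_path)
    size_mb = os.path.getsize(tmp_path) / 1e6
    os.remove(tmp_path)
    return size_mb

print(state_dict_size_mb(model), state_dict_size_mb(quantized_model))
```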
The quantized model works fine when I use it straight away in the script to make predictions, however I'm having trouble saving it and reloading it.
I tried a few things, first using `save_pretrained`:
```python
quantized_model.save_pretrained(quantized_output_dir)
```
And then loading it using:
```python
model = AutoModelForTokenClassification.from_pretrained(quantized_output_dir)
```
When I use it to make predictions, I get the warning:
```
Some weights of the model checkpoint at data/model3/quantized3/ were not used when initializing DistilBertForTokenClassification: ['distilbert.transformer.layer.0.attention.q_lin.scale', 'distilbert.transformer.layer.0.attention.q_lin.zero_point', 'distilbert.transformer.layer.0.attention.q_lin._packed_params.dtype', 'distilbert.transformer.layer.0.attention.q_lin._packed_params._packed_params', 'distilbert.transformer.layer.0.attention.k_lin.scale', 'distilbert.transformer.layer.0.attention.k_lin.zero_point', 'distilbert.transformer.layer.0.attention.k_lin._packed_params.dtype', 'distilbert.transformer.layer.0.attention.k_lin._packed_params._packed_params', 'distilbert.transformer.layer.0.attention.v_lin.scale', 'distilbert.transformer.layer.0.attention.v_lin.zero_point', 'distilbert.transformer.layer.0.attention.v_lin._packed_params.dtype', 'distilbert.transformer.layer.0.attention.v_lin._packed_params._packed_params', 'distilbert.transformer.layer.0.attention.out_lin.scale', 'distilbert.transformer.layer.0.attention.out_lin.zero_point', 'distilbert.transformer.layer.0.attention.out_lin._packed_params.dtype', 'distilbert.transformer.layer.0.attention.out_lin._packed_params._packed_params', 'distilbert.transformer.layer.0.ffn.lin1.scale', 'distilbert.transformer.layer.0.ffn.lin1.zero_point', 'distilbert.transformer.layer.0.ffn.lin1._packed_params.dtype', 'distilbert.transformer.layer.0.ffn.lin1._packed_params._packed_params', 'distilbert.transformer.layer.0.ffn.lin2.scale', 'distilbert.transformer.layer.0.ffn.lin2.zero_point', 'distilbert.transformer.layer.0.ffn.lin2._packed_params.dtype', 'distilbert.transformer.layer.0.ffn.lin2._packed_params._packed_params', 'distilbert.transformer.layer.1.attention.q_lin.scale',
```
For all the layers.
And of course I got wrong predictions because it's as if the model isn't fine-tuned.
I tried saving it using:
```python
torch.save(quantized_model.state_dict(), path)
```
and loading it using:
```python
config = DistilBertConfig.from_pretrained("distilbert-base-multilingual-cased", num_labels=5)
model = DistilBertForTokenClassification.from_pretrained("distilbert-base-multilingual-cased", config=config)
model.load_state_dict(torch.load(path))
```
and I got this runtime error:
```
RuntimeError: Error(s) in loading state_dict for DistilBertForTokenClassification:
Missing key(s) in state_dict: "distilbert.transformer.layer.0.attention.q_lin.weight", "distilbert.transformer.layer.0.attention.q_lin.bias", "distilbert.transformer.layer.0.attention.k_lin.weight", "distilbert.transformer.layer.0.attention.k_lin.bias", "distilbert.transformer.layer.0.attention.v_lin.weight", "distilbert.transformer.layer.0.attention.v_lin.bias", "distilbert.transformer.layer.0.attention.out_lin.weight", "distilbert.transformer.layer.0.attention.out_lin.bias", "distilbert.transformer.layer.0.ffn.lin1.weight", "distilbert.transformer.layer.0.ffn.lin1.bias", "distilbert.transformer.layer.0.ffn.lin2.weight", "distilbert.transformer.layer.0.ffn.lin2.bias", "distilbert.transformer.layer.1.attention.q_lin.weight",
Unexpected key(s) in state_dict: "distilbert.transformer.layer.0.attention.q_lin.scale", "distilbert.transformer.layer.0.attention.q_lin.zero_point", "distilbert.transformer.layer.0.attention.q_lin._packed_params.dtype", "distilbert.transformer.layer.0.attention.q_lin._packed_params._packed_params", "distilbert.transformer.layer.0.attention.k_lin.scale", "distilbert.transformer.layer.0.attention.k_lin.zero_point", "distilbert.transformer.layer.0.attention.k_lin._packed_params.dtype", "distilbert.transformer.layer.0.attention.k_lin._packed_params._packed_params", "distilbert.transformer.layer.0.attention.v_lin.scale", "distilbert.transformer.layer.0.attention.v_lin.zero_point", "distilbert.transformer.layer.0.attention.v_lin._packed_params.dtype", "distilbert.transformer.layer.0.attention.v_lin._packed_params._packed_params", "distilbert.transformer.layer.0.attention.out_lin.scale", "distilbert.transformer.layer.0.attention.out_lin.zero_point", "distilbert.transformer.layer.0.attention.out_lin._packed_params.dtype", "distilbert.transformer.layer.0.attention.out_lin._packed_params._packed_params", "distilbert.transformer.layer.0.ffn.lin1.scale", "distilbert.transformer.layer.0.ffn.lin1.zero_point", "distilbert.transformer.layer.0.ffn.lin1._packed_params.dtype", "distilbert.transformer.layer.0.ffn.lin1._packed_params._packed_params", "distilbert.transformer.layer.0.ffn.lin2.scale", "distilbert.transformer.layer.0.ffn.lin2.zero_point", "distilbert.transformer.layer.0.ffn.lin2._packed_params.dtype", "distilbert.transformer.layer.0.ffn.lin2._packed_params._packed_params", "distilbert.transformer.layer.1.attention.q_lin.scale", "classifier._packed_params.dtype", "classifier._packed_params._packed_params".
```
For all the layers as well (I didn't paste everything, to keep the text short).
Here is the text when printing the quantized model:
```text
DistilBertForTokenClassification(
(distilbert): DistilBertModel(
(embeddings): Embeddings(
(word_embeddings): Embedding(119547, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(transformer): Transformer(
(layer): ModuleList(
(0): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(1): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(2): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(3): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(4): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(5): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
)
)
)
(dropout): Dropout(p=0.1, inplace=False)
(classifier): DynamicQuantizedLinear(in_features=768, out_features=5, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
```
## Expected behavior
You can successfully load the quantized fine-tuned model to make predictions.
Could the "DynamicQuantizedLinear" modules (instead of "Linear") be causing this problem?
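For reference, a minimal sketch of the round trip suggested in the comments above: the freshly built model must be re-quantized before loading the quantized weights (the checkpoint path is illustrative):
```python
import torch
from transformers import DistilBertConfig, DistilBertForTokenClassification

# Rebuild the float model with the same architecture as the fine-tuned one.
config = DistilBertConfig.from_pretrained("distilbert-base-multilingual-cased", num_labels=5)
model = DistilBertForTokenClassification.from_pretrained(
    "distilbert-base-multilingual-cased", config=config
)

# Re-apply the same dynamic quantization so the module structure (and hence the
# state dict keys: scale, zero_point, _packed_params, ...) matches what was saved.
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

quantized_model.load_state_dict(torch.load("quantized_state_dict.pt"))
quantized_model.eval()
```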
Thanks in advance for your help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7329/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7328/comments | https://api.github.com/repos/huggingface/transformers/issues/7328/events | https://github.com/huggingface/transformers/issues/7328 | 706,674,197 | MDU6SXNzdWU3MDY2NzQxOTc= | 7,328 | Add PRADO model | {
"login": "ierezell",
"id": 30974685,
"node_id": "MDQ6VXNlcjMwOTc0Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ierezell",
"html_url": "https://github.com/ierezell",
"followers_url": "https://api.github.com/users/ierezell/followers",
"following_url": "https://api.github.com/users/ierezell/following{/other_user}",
"gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ierezell/subscriptions",
"organizations_url": "https://api.github.com/users/ierezell/orgs",
"repos_url": "https://api.github.com/users/ierezell/repos",
"events_url": "https://api.github.com/users/ierezell/events{/privacy}",
"received_events_url": "https://api.github.com/users/ierezell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Yeah, This is a good model to go with if the the text sequence is too long.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,609 | 1,609 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
PRADO is a model made by Google that performs on par with BERT with 100x fewer parameters.
[link to the paper](https://www.aclweb.org/anthology/D19-1506.pdf)
[link to the model code](https://github.com/tensorflow/models/tree/master/research/sequence_projection)
## Open source status
* [X] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [X] who are the authors: (mention them, if possible by @gh-username)
Prabhu Kaliamoorthi / Sujith Ravi / Zornitsa Kozareva
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7328/reactions",
"total_count": 16,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7328/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7327/comments | https://api.github.com/repos/huggingface/transformers/issues/7327/events | https://github.com/huggingface/transformers/issues/7327 | 706,586,081 | MDU6SXNzdWU3MDY1ODYwODE= | 7,327 | PegasusTokenizer: Newline symbol | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 2368374212,
"node_id": "MDU6TGFiZWwyMzY4Mzc0MjEy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/pegasus",
"name": "pegasus",
"color": "1f76a8",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,600 | 1,606 | 1,606 | CONTRIBUTOR | null | Ported models generate the `<n>` token at the beginning of sentences, whereas ours do not. The Pegasus [original code](https://github.com/google-research/pegasus/blob/master/pegasus/ops/public_parsing_ops.py#L40) replaces the `\n` newline symbol with `<n>`, so `PegasusTokenizer` should probably do the same.
```python
_NEWLINE_SYMBOL = "<n>"
text = tf.strings.regex_replace(text, "\n", _NEWLINE_SYMBOL)
```
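A minimal sketch of a pure-Python equivalent the tokenizer could apply (the helper names are made up for the example; this is not the current `PegasusTokenizer` API):
```python
_NEWLINE_SYMBOL = "<n>"  # sentinel used by the original pegasus code

def encode_newlines(text: str) -> str:
    # Mirror of tf.strings.regex_replace(text, "\n", "<n>"), applied before tokenization.
    return text.replace("\n", _NEWLINE_SYMBOL)

def decode_newlines(text: str) -> str:
    # Inverse mapping, applied after decoding generated summaries.
    return text.replace(_NEWLINE_SYMBOL, "\n")
```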
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7327/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7326/comments | https://api.github.com/repos/huggingface/transformers/issues/7326/events | https://github.com/huggingface/transformers/pull/7326 | 706,547,623 | MDExOlB1bGxSZXF1ZXN0NDkxMDc1MjAx | 7,326 | Check decorator order | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger, let me investigate this. @slow should be the same as any other skip decorators, so the order there shouldn't matter. They should be able to stack up. If they don't, it's probably a bug somewhere.\r\n\r\nIt's possible that some other decorators don't play well with our skip decorators, which would require all the skip decorators to be in the last group. But all the ones under our control should be interchangeable order-wise.\r\n\r\nI initially discovered this issue having `@slow`, followed by `@parametrized` and had to swap the order for `@slow` to work.\r\n\r\nI will look at it today.",
"Thanks @stas00!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=h1) Report\n> Merging [#7326](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6bc72c469c38a611fb99c3d61807f59b43fe2c9?el=desc) will **decrease** coverage by `0.37%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7326 +/- ##\n==========================================\n- Coverage 77.40% 77.03% -0.38% \n==========================================\n Files 181 181 \n Lines 34827 34827 \n==========================================\n- Hits 26958 26828 -130 \n- Misses 7869 7999 +130 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `20.38% <0.00%> (-67.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `83.11% <0.00%> (-10.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `83.58% <0.00%> (-8.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.60% <0.00%> (-7.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| ... 
and [18 more](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=footer). Last update [d6bc72c...7290951](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sgugger, please see https://github.com/huggingface/transformers/pull/7334\r\n\r\nImplications for this PR: At the moment the check needs to do that only for `@parameterized.*` - it has to be first. All other skip decorators require no special order. \r\n\r\nFor `@parameterized` we have the following possible imported decorators (let's hope they all are consistently imported):\r\n```\r\n@parameterized\r\[email protected]\r\n@parameterized_class\r\n```\r\nThe full doc is here: https://pypi.org/project/parameterized/\r\n\r\nThere is no problem whatsoever with `@pytest.mark.parametrize` (but it only works with non-unittests) - can use it in any order.\r\n\r\nThat's an awesome validator! Thanks for adding this to our magic toolkit, @sgugger ",
"Ok, I changed the script to detect this then.",
"the swapping of order in the first parts of the PR is not needed, but there is no harm in it either. You can just reset those or not - up to you.",
"Feeling lazy so since it doesn't matter, let's keep those.",
"LGTM"
] | 1,600 | 1,600 | 1,600 | COLLABORATOR | null | As @stas00 pointed out, the slow decorator is ignored if it's not put last. To make sure we don't make that mistake unintentionally, and to fix the places where this is not the case, I wrote a script that checks the decorator order and fails on `make quality` if there is a wrong order somewhere.
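As an illustration only, a regex-based sketch of such a check; the actual script in this PR may be structured quite differently:
```python
import re
import sys
from pathlib import Path

# Flag files where @slow is followed by further decorators before the `def`,
# i.e. @slow is not the innermost (last) decorator applied.
NOT_LAST = re.compile(r"@slow\s*\n(?:\s*@.*\n)+\s*def ")

failures = [str(p) for p in Path("tests").glob("**/*.py") if NOT_LAST.search(p.read_text())]

if failures:
    print("@slow should be the last decorator in:\n  " + "\n  ".join(failures))
    sys.exit(1)  # make `make quality` fail
```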
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7326/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7326",
"html_url": "https://github.com/huggingface/transformers/pull/7326",
"diff_url": "https://github.com/huggingface/transformers/pull/7326.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7326.patch",
"merged_at": 1600937677000
} |
https://api.github.com/repos/huggingface/transformers/issues/7325 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7325/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7325/comments | https://api.github.com/repos/huggingface/transformers/issues/7325/events | https://github.com/huggingface/transformers/pull/7325 | 706,520,492 | MDExOlB1bGxSZXF1ZXN0NDkxMDUyODU0 | 7,325 | Mark big downloads slow | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,600 | 1,600 | 1,600 | COLLABORATOR | null | This PR adds the slow decorator for models we don't want to download at each CI run. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7325/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7325",
"html_url": "https://github.com/huggingface/transformers/pull/7325",
"diff_url": "https://github.com/huggingface/transformers/pull/7325.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7325.patch",
"merged_at": 1600791713000
} |
https://api.github.com/repos/huggingface/transformers/issues/7324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7324/comments | https://api.github.com/repos/huggingface/transformers/issues/7324/events | https://github.com/huggingface/transformers/issues/7324 | 706,514,554 | MDU6SXNzdWU3MDY1MTQ1NTQ= | 7,324 | [s2s] Marian beam search slow for en-de | {
"login": "orendar",
"id": 24236024,
"node_id": "MDQ6VXNlcjI0MjM2MDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/24236024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orendar",
"html_url": "https://github.com/orendar",
"followers_url": "https://api.github.com/users/orendar/followers",
"following_url": "https://api.github.com/users/orendar/following{/other_user}",
"gists_url": "https://api.github.com/users/orendar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orendar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orendar/subscriptions",
"organizations_url": "https://api.github.com/users/orendar/orgs",
"repos_url": "https://api.github.com/users/orendar/repos",
"events_url": "https://api.github.com/users/orendar/events{/privacy}",
"received_events_url": "https://api.github.com/users/orendar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
},
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Interesting, a few pointers:\r\n\r\n+ Beam search happens by default -- the fancy eval flags (`--eval_beams=2 --eval_max_gen_length=128 --num_val_sanity_steps=0`) make beam search faster.\r\n+ fp16 should work with or without apex.\r\n+ Shorter sequences make val faster.\r\n+ When the marian models are not well trained, they can get into infinite loops and generate forever. That's why `--eval_max_gen_length` is essential.\r\n\r\nStill, 6-7 minutes to run beam search on 1,000 sentences is shockingly slow. Try running a marian checkpoint on your val set and seeing how long that takes. \r\n\r\n\r\nIf you share a colab \r\n",
"Hey, sorry for not clarifying some things:\r\n\r\n- I'm aware of the eval flags and I did set them such as eval_beams=1, didn't seem to make a difference. I also explicitly specify `--check_val_every_n_epoch=1 --limit_val_batches=1.0 --val_check_interval=1.0` (and checked PL Trainer docs) to make sure that I am doing a single pass over my validation set once every epoch (or X epochs).\r\n- Should I disable apex then? Could it be the culprit?\r\n- Sequence length also did not seem to make much of a difference, something like linear improvement (so from length of 300 and 7 minutes to length of 200 and 5 minutes). The original models are trained with a maximum length of 500 so model should support 300 technically.\r\n- The length flags are all set (for source, target and for train,eval). The fine-tuned model starts off at ~10 BLEU before fine-tuning and ends up at ~45 BLEU after fine-tuning, so clearly the training is working, but the validation still takes ~7 minutes throughout the process.\r\n- Running eval.py using both a pre-trained Marian model and my own fine-tuned version still takes 7 minutes, so the same as the validation that happens during the fine-tuning process.\r\n- Will it help if I share a Colab which reproduces this problem? Should I share it as an .ipynb file or as a link to the actual Colab URL?",
"\r\nCan you send me a run_eval.py command that I can run on my machine that you expect to be faster? This is hopefully going to be the simplest manifestation of the problem. Clearly apex not the culprit if run_eval is slow.\r\n\r\nMarian generation is a much smaller surface area than translation finetuning. Many fewer moving parts.",
"Thanks for the fast response and willingness to help! I made a short notebook that demonstrates the problem [here](https://colab.research.google.com/drive/11HNlWfFjzBJXDEadswwkeEhUzoh6tWFm?usp=sharing). The only meaningful difference from the actual env I run my code on is apex, which as you said should not affect eval speed.",
"I am busy today and tomorrow but will circle back to this if you are still stuck.\r\nOne admittedly unlikely possibility is that the numbers at the beginning of the source sentences throw the model off.\r\n\r\nAnother is that this is expected performance/speed for 1000 examples * 300 tokens * 5 beams. For example, running marian on 2000 shorter wmt examples with max_len=128, 1 GPU takes about 3 minutes. So if the seqlen cost is superlinear, as theory suggests, 6-7 minutes might not be unreasonable. For example, on [CPU](https://huggingface.co/Helsinki-NLP/opus-mt-en-de?text=230%29+There+are+two+corridors+in+the+right+hand+side+wing+of+Malchut%2C+which+divide+from+this+wing+into+two+other+nations%2C+which+are+close+to+Israel+in+the+unification%2C+to+bring+them+into+the+corridors.+Under+the+left+wing+are+two+other+corridors%2C+which+divide+into+two+other+nations%2C+Amon+and+Moav%2C+and+they+are+all+called+%E2%80%9Cliving+soul.%E2%80%9D+Previously+it+was+said+that+there+are+several+corridors%2C+and+here+it+is+said+that+there+are+only+two+on+the+right+and+two+on+the+left.+The+thing+is+that+here+it+is+only+about+the+inclusive%2C+meaning+that+there+are+two+inclusive+corridors+on+the+right%2C+for+the+nations+that+belong+to+the+right%2C+and+there+are+two+inclusive+corridors+on+the+left%2C+for+the+nations+that+belong+to+the+left.+The+two+nations+on+the+right+include+all+the+nations+on+the+right+that+relate+to+the+two+general+corridors+on+the+right+wing%2C+but+The+Zohar+does+not+explain+which+are+they.+The+two+nations+on+the+left+include+all+the+nations+on+the+left%2C+which+are+Amon+and+Moav%2C+and+relate+to+the+two+general+corridors+on+the+left+wing.+All+of+them+are+called+%E2%80%9CLiving+souls.%E2%80%9D+All+the+souls+of+proselytes+that+come+from+all+the+nations+are+called+%E2%80%9Cliving+souls.%E2%80%9D+This+is+because+they+can+receive+only+from+the+Zivug+of+Gadlut+de+ZON%2C+when+ZON+are+in+the+place+of+upper+AVI.+Then+Malchut+is+called+%E2%80%9Cliving+soul%E2%80%9D+because+there+is+the+light+of+AVI+in+her%2C+which+is+the+light+of+Haya.+And+since+the+souls+of+the+proselytes+receive+from+the+wings+of+the+living+soul%2C+they+are+called+%E2%80%9Cliving+souls%2C%E2%80%9D+as+well.) your first examples takes 13 seconds.\r\n\r\nYou could try sentence splitting:\r\n\r\n### Sentence Splitting starter code\r\n```python\r\nSRC = \"\"\"230) There are two corridors in the right hand side wing of Malchut, which divide from this wing into two other nations, which are close to Israel in the unification, to bring them into the corridors. Under the left wing are two other corridors, which divide into two other nations, Amon and Moav, and they are all called “living soul.” Previously it was said that there are several corridors, and here it is said that there are only two on the right and two on the left. The thing is that here it is only about the inclusive, meaning that there are two inclusive corridors on the right, for the nations that belong to the right, and there are two inclusive corridors on the left, for the nations that belong to the left. The two nations on the right include all the nations on the right that relate to the two general corridors on the right wing, but The Zohar does not explain which are they. The two nations on the left include all the nations on the left, which are Amon and Moav, and relate to the two general corridors on the left wing. 
All of them are called “Living souls.” All the souls of proselytes that come from all the nations are called “living souls.” This is because they can receive only from the Zivug of Gadlut de ZON, when ZON are in the place of upper AVI. Then Malchut is called “living soul” because there is the light of AVI in her, which is the light of Haya. And since the souls of the proselytes receive from the wings of the living soul, they are called “living souls,” as well.\r\n\"\"\"\r\n\r\nfrom transformers import MarianMTModel, MarianTokenizer\r\n\r\nmodel = MarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')\r\ntok = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de')\r\nsplat = SRC.split('.')\r\nbatch = tok.prepare_seq2seq_batch(splat)\r\nbatch1 = tok.prepare_seq2seq_batch([SRC])\r\n# time these\r\ng0 = model.generate(**batch)\r\ng1 = model.generate(**batch1)\r\n```\r\n",
"Thanks for taking time to check it out. Wanted to add a few more things:\r\n- It's not specific to en-de (I just put a random Marian model in the example).\r\n- It's difficult to split longer sentences in training/validation phase, because often periods or line breaks are in different places and the chunks do not necessarily correspond, so generating such language-paired data from real text is more difficult.\r\n- Longer sentences often contain references and topics which would be lost when breaking them down, and thus the quality of the translation would be degraded.\r\n\r\nRegarding the experiments you've suggested:\r\n\r\n- I tried running eval on 1k length 128 sentences as you suggested, and it took 5.5 minutes without changing the eval parameters and under 2 minutes (~3x faster) when forcing num_beams=1. \r\n\r\n- However when I run fine-tuning, I see the dramatic slowdown I reported before whenever validation is included in the run, with the parameter eval_beams=1 vs eval_beams=5 helping but still not entirely accounting for the issue. I have added apex to the notebook (which seems to be required to run finetuning with fp16, otherwise I get the error `You set 'use_amp=True' but do not have apex installed`), and the full details can be seen [here](https://colab.research.google.com/drive/11HNlWfFjzBJXDEadswwkeEhUzoh6tWFm?usp=sharing).\r\nI use the same 1k dataset for both train and val with MAX_LEN=128 for the model.\r\nTime for 1 epoch without validation: 5 seconds.\r\nTime for 1 epoch with validation with eval_beams=1: ~100 seconds.\r\nTime for 1 epoch with validation with eval_beams=5: ~200 seconds.\r\n\r\nThis seems to indicate that validation/eval is ~20x slower than the actual training as a baseline, which also aligns with my experience from my actual fine-tuning experiments, so I'm wondering if this is expected behavior (for example the train_mbart_enro script performs validation 4 times per epoch, which therefore must be incredibly slow?). If this is the expected performance then feel free to close this issue and I'll just try to run less validation :)",
"+ For summarization, I set `--n_val 100` or so if I want faster validation.\r\n+ the wmt_enro dataset has a very different train to validation ratio: 610,319 vs 2,000. So val is not a dominant fraction of the total cost.\r\n+ In the new `Seq2SeqTrainer` from @patil-suraj, evaluation with beam search will be optional. Hopefully this will improve your experience.",
"I see, thanks again!"
] | 1,600 | 1,600 | 1,600 | NONE | null | Hey @sshleifer (tagging because of translation) - I'm not sure whether I am misunderstanding something or this is an actual issue so apologies, but it seems like validation/eval is significantly slower than training and is a serious bottleneck when fine-tuning translation models.
I am running on Colab with a V100, trying to fine-tune a MarianMT model on a dataset of ~10k sentences of lengths up to 300. Training on 90% of the data takes about 2 minutes per epoch, whereas validation/eval on the remaining 10% takes about 6-7 minutes without any fancy eval flags, no beam search, etc. This makes fine-tuning ~5x slower if validation runs every epoch (and still significantly slower even with partial validation or validation every other epoch).
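For concreteness, a rough way to time generation in isolation (the checkpoint, batch size, and sentences below are placeholders, and a GPU is used only if available):
```python
import time
import torch
from transformers import MarianMTModel, MarianTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de").to(device).eval()

# Dummy validation batch; the sentences are placeholders.
sentences = ["This is validation sentence number %d." % i for i in range(64)]
batch = {k: v.to(device) for k, v in tok.prepare_seq2seq_batch(sentences).items()}

for beams in (1, 5):
    start = time.time()
    with torch.no_grad():
        model.generate(**batch, num_beams=beams, max_length=128)
    print("num_beams=%d: %.1fs" % (beams, time.time() - start))
```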
I am using apex and pytorch 1.5.1 as instructed in the readme and in the issues regarding apex fp16 training and bs=16 for both train and validation, different batch sizes did not seem to help. Happy to post more info but the rest is pretty similar to the seq2seq examples. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7324/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7324/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7323/comments | https://api.github.com/repos/huggingface/transformers/issues/7323/events | https://github.com/huggingface/transformers/issues/7323 | 706,498,960 | MDU6SXNzdWU3MDY0OTg5NjA= | 7,323 | T5 Cross-attention Decoder - Possible bug with relative_bias | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,600 | 1,601 | 1,601 | MEMBER | null | @patrickvonplaten - investigate whether this is a possible bug: https://github.com/google-research/text-to-text-transfer-transformer/issues/415
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7323/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7322/comments | https://api.github.com/repos/huggingface/transformers/issues/7322/events | https://github.com/huggingface/transformers/pull/7322 | 706,486,728 | MDExOlB1bGxSZXF1ZXN0NDkxMDI0OTg1 | 7,322 | Add num workers cli arg | {
"login": "chadykamar",
"id": 8629969,
"node_id": "MDQ6VXNlcjg2Mjk5Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8629969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chadykamar",
"html_url": "https://github.com/chadykamar",
"followers_url": "https://api.github.com/users/chadykamar/followers",
"following_url": "https://api.github.com/users/chadykamar/following{/other_user}",
"gists_url": "https://api.github.com/users/chadykamar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chadykamar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chadykamar/subscriptions",
"organizations_url": "https://api.github.com/users/chadykamar/orgs",
"repos_url": "https://api.github.com/users/chadykamar/repos",
"events_url": "https://api.github.com/users/chadykamar/events{/privacy}",
"received_events_url": "https://api.github.com/users/chadykamar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've addressed the comments regarding the docstring and help message. \r\n\r\nI'm a little less familiar with TensorFlow 2.0, but it seems like any preprocessing is done by the user before passing `train_dataset` and `eval_dataset` to the `TFTrainer` so there isn't an opportunity to set `num_parallel_calls` (I wasn't able to find any calls to `map` or `interleave` save for some markdown examples).",
"Indeed there is no call directly in the trainer and these functions have to be run directly in the example script. Nevertheless, this parameter is still useful as it can be used directly in the example script, instead of updating it manually."
] | 1,600 | 1,600 | 1,600 | CONTRIBUTOR | null | Fixes #6316
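A hedged usage sketch, assuming the new argument ends up exposed on `TrainingArguments` as `dataloader_num_workers` and forwarded to the PyTorch `DataLoader`:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./results",
    dataloader_num_workers=4,  # subprocesses used by torch.utils.data.DataLoader
)
# Inside the Trainer this roughly amounts to:
# DataLoader(train_dataset, batch_size=..., num_workers=args.dataloader_num_workers)
```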
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7322/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7322",
"html_url": "https://github.com/huggingface/transformers/pull/7322",
"diff_url": "https://github.com/huggingface/transformers/pull/7322.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7322.patch",
"merged_at": 1600800283000
} |