| Column | Type | Range / classes |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k (nullable) |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k (nullable) |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/10635
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10635/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10635/comments
https://api.github.com/repos/huggingface/transformers/issues/10635/events
https://github.com/huggingface/transformers/pull/10635
828,240,892
MDExOlB1bGxSZXF1ZXN0NTkwMDcwOTMx
10,635
Document Trainer limitation on custom models
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? As discussed in #10629, this documents the limitations of the `Trainer` when working with custom models.
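A minimal sketch (hypothetical model, argument names and sizes) of the convention discussed in #10629 that this PR documents: a custom model used with the `Trainer` should accept its labels through a `labels` keyword argument and return a tuple with the loss first when labels are provided.

```python
import torch
from torch import nn

class TrainerCompatibleModel(nn.Module):
    """Hypothetical custom model that follows the Trainer's conventions."""

    def __init__(self, hidden_size=128, num_labels=2):
        super().__init__()
        self.backbone = nn.Linear(hidden_size, hidden_size)
        self.classifier = nn.Linear(hidden_size, num_labels)
        self.loss_fn = nn.CrossEntropyLoss()

    # Parameter names must match the keys of the batches the dataset yields,
    # because the Trainer calls model(**batch).
    def forward(self, features, labels=None):
        logits = self.classifier(torch.relu(self.backbone(features)))
        if labels is not None:
            loss = self.loss_fn(logits, labels)
            return (loss, logits)  # loss first, then the other outputs
        return (logits,)
```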
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10635/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10635/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10635", "html_url": "https://github.com/huggingface/transformers/pull/10635", "diff_url": "https://github.com/huggingface/transformers/pull/10635.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10635.patch", "merged_at": 1615406302000 }
https://api.github.com/repos/huggingface/transformers/issues/10634
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10634/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10634/comments
https://api.github.com/repos/huggingface/transformers/issues/10634/events
https://github.com/huggingface/transformers/issues/10634
828,227,350
MDU6SXNzdWU4MjgyMjczNTA=
10,634
Issues with Multi-GPU
{ "login": "bwang482", "id": 6764450, "node_id": "MDQ6VXNlcjY3NjQ0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bwang482", "html_url": "https://github.com/bwang482", "followers_url": "https://api.github.com/users/bwang482/followers", "following_url": "https://api.github.com/users/bwang482/following{/other_user}", "gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}", "starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bwang482/subscriptions", "organizations_url": "https://api.github.com/users/bwang482/orgs", "repos_url": "https://api.github.com/users/bwang482/repos", "events_url": "https://api.github.com/users/bwang482/events{/privacy}", "received_events_url": "https://api.github.com/users/bwang482/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Had to remove the following:\r\n\r\n```\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\nn_gpus = torch.cuda.device_count()\r\nif n_gpus > 1:\r\n model = nn.DataParallel(model)\r\nmodel.to(device)\r\n```\r\n\r\nThen everything is running for torch==1.7.1 for both GPUs. So `Trainer()` sorts everything by itself?" ]
1,615
1,615
1,615
NONE
null
- `transformers` version: 4.3.3 - Platform: Linux-4.15.0-132-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes, multi GeForce RTX 2080 Ti GPUs - Using distributed or parallel set-up in script?: DataParallel - NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 I have tried to run the IMDb review sequence classification from https://huggingface.co/transformers/custom_datasets.html on two GPUs, using `DataParallel`: ``` import os os.environ["CUDA_VISIBLE_DEVICES"]="6,7" import time import torch import torch.nn as nn from pathlib import Path from sklearn.model_selection import train_test_split from transformers import DistilBertTokenizerFast from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments def read_imdb_split(split_dir): split_dir = Path(split_dir) texts = [] labels = [] for label_dir in ["pos", "neg"]: for text_file in (split_dir/label_dir).iterdir(): texts.append(text_file.read_text()) labels.append(0 if label_dir is "neg" else 1) return texts, labels class IMDbDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpus = torch.cuda.device_count() train_texts, train_labels = read_imdb_split('aclImdb/train') test_texts, test_labels = read_imdb_split('aclImdb/test') train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2) tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') train_encodings = tokenizer(train_texts, truncation=True, padding=True) val_encodings = tokenizer(val_texts, truncation=True, padding=True) test_encodings = tokenizer(test_texts, truncation=True, padding=True) train_dataset = IMDbDataset(train_encodings, train_labels) val_dataset = IMDbDataset(val_encodings, val_labels) test_dataset = IMDbDataset(test_encodings, test_labels) training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, ) model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") if n_gpus > 1: model = nn.DataParallel(model) model.to(device) trainer = Trainer( model=model, # the instantiated ๐Ÿค— Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train() ``` For **torch==1.8.0**, no matter I use single GPU or multi GPU, I encounter the same CUDA error (shown below). For **torch==1.7.1**, I am able to run the code on single GPU with no issue. However, with multi-GPU, the `Input, output and indices must be on the current device` error occurs (also shown below). 
With **torch==1.8.0**: ``` 2021-03-10 19:16:07.624155: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias'] - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 0%| | 0/1875 [00:00<?, ?it/s]Traceback (most recent call last): File "test.py", line 80, in <module> trainer.train() File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/trainer.py", line 940, in train tr_loss += self.training_step(model, inputs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/trainer.py", line 1304, in training_step loss = self.compute_loss(model, inputs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/trainer.py", line 1334, in compute_loss outputs = model(**inputs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/_utils.py", line 429, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/_utils.py", line 429, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 623, in forward return_dict=return_dict, File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 487, in forward return_dict=return_dict, File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 309, in forward x=hidden_state, attn_mask=attn_mask, head_mask=head_mask[i], output_attentions=output_attentions File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 256, in forward output_attentions=output_attentions, File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 177, in forward q = shape(self.q_lin(query)) # (bs, n_heads, q_length, dim_per_head) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 94, in forward return F.linear(input, self.weight, self.bias) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/functional.py", line 1753, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)` ``` With **torch==1.7.1**: ``` 2021-03-10 19:22:14.938302: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing 
DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias'] - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 0%| | 0/1875 [00:00<?, ?it/s]Traceback (most recent call last): File "test.py", line 80, in <module> trainer.train() File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/trainer.py", line 940, in train tr_loss += self.training_step(model, inputs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/trainer.py", line 1304, in training_step loss = self.compute_loss(model, inputs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/trainer.py", line 1334, in compute_loss outputs = model(**inputs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/_utils.py", line 428, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/_utils.py", line 428, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 1 on device 1. 
Original Traceback (most recent call last): File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 623, in forward return_dict=return_dict, File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 480, in forward inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 107, in forward word_embeddings = self.word_embeddings(input_ids) # (bs, max_seq_length, dim) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 126, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/functional.py", line 1852, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Input, output and indices must be on the current device ``` With **torch==1.5.0**: ``` 2021-03-10 19:23:51.586005: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias'] - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
0%| | 0/1875 [00:00<?, ?it/s]Traceback (most recent call last): File "test.py", line 80, in <module> trainer.train() File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/trainer.py", line 940, in train tr_loss += self.training_step(model, inputs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/trainer.py", line 1304, in training_step loss = self.compute_loss(model, inputs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/trainer.py", line 1334, in compute_loss outputs = model(**inputs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 1 on device 1. 
Original Traceback (most recent call last): File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 623, in forward return_dict=return_dict, File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 480, in forward inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 107, in forward word_embeddings = self.word_embeddings(input_ids) # (bs, max_seq_length, dim) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/mnt/sdb/env1/lib/python3.6/site-packages/torch/nn/functional.py", line 1724, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorIndex.cu:403 ```
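A hedged sketch of the working setup described in the comment above, reusing the script's variable names (`training_args`, `train_dataset`, `val_dataset`): the model is handed to the `Trainer` without any manual `nn.DataParallel` wrapping or `.to(device)` call, since the `Trainer` handles device placement and multi-GPU wrapping itself.

```python
from transformers import DistilBertForSequenceClassification, Trainer

# training_args, train_dataset and val_dataset are assumed to be built exactly
# as in the script above; only the model handling changes.
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")

# No nn.DataParallel(model) and no model.to(device): when several GPUs are
# visible, the Trainer moves the model and wraps it in DataParallel on its own.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
)
trainer.train()
```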
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10634/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10633
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10633/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10633/comments
https://api.github.com/repos/huggingface/transformers/issues/10633/events
https://github.com/huggingface/transformers/pull/10633
828,167,271
MDExOlB1bGxSZXF1ZXN0NTkwMDA1MjM0
10,633
Extend trainer logging for sm
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
# What does this PR do? Adds a helper function (`add_handler`) to `logging.py` for attaching a native logging handler when needed. In the `Trainer`, adds a `StreamHandler(sys.stdout)` when training is run on SageMaker, so that logs are forwarded.
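A hedged, standard-library-only sketch of the behaviour this PR adds (not the PR's actual code): attach a `StreamHandler(sys.stdout)` to the `transformers` logger so log records reach stdout, which SageMaker captures and forwards to the job logs.

```python
import logging
import sys

# Send transformers log records to stdout; SageMaker collects stdout, so the
# Trainer's progress and loss lines become visible in the training job logs.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s"))

transformers_logger = logging.getLogger("transformers")
transformers_logger.addHandler(handler)
transformers_logger.setLevel(logging.INFO)
```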
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10633/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10633", "html_url": "https://github.com/huggingface/transformers/pull/10633", "diff_url": "https://github.com/huggingface/transformers/pull/10633.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10633.patch", "merged_at": 1615406029000 }
https://api.github.com/repos/huggingface/transformers/issues/10632
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10632/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10632/comments
https://api.github.com/repos/huggingface/transformers/issues/10632/events
https://github.com/huggingface/transformers/pull/10632
828,116,610
MDExOlB1bGxSZXF1ZXN0NTg5OTU5MTkz
10,632
Ensure metric results are JSON-serializable
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? Metrics returned from numpy (with an `np.mean()` for instance) are not real Python floats but `np.float32` (or other NumPy scalar type) objects that are not JSON-serializable. This causes problems when the metrics are saved in JSON format in the `Trainer`, for instance when using `load_best_model_at_end`. This PR fixes that by recursively applying `.item()` to the metrics dictionary. Fixes #10299
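A hedged sketch of the kind of recursive conversion the PR describes (not the actual implementation): NumPy scalars in a metrics dictionary are turned into plain Python numbers via `.item()` so the result can be dumped to JSON.

```python
import json
import numpy as np

def to_json_serializable(metrics):
    """Recursively convert NumPy scalar values (np.float32, np.int64, ...) to Python numbers."""
    if isinstance(metrics, dict):
        return {k: to_json_serializable(v) for k, v in metrics.items()}
    if isinstance(metrics, np.generic):
        return metrics.item()
    return metrics

metrics = {"eval_accuracy": np.float32(0.91), "epoch": np.int64(3)}
print(json.dumps(to_json_serializable(metrics)))  # json.dumps(metrics) would raise TypeError
```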
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10632/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10632", "html_url": "https://github.com/huggingface/transformers/pull/10632", "diff_url": "https://github.com/huggingface/transformers/pull/10632.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10632.patch", "merged_at": 1615471223000 }
https://api.github.com/repos/huggingface/transformers/issues/10631
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10631/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10631/comments
https://api.github.com/repos/huggingface/transformers/issues/10631/events
https://github.com/huggingface/transformers/issues/10631
828,054,113
MDU6SXNzdWU4MjgwNTQxMTM=
10,631
Help using Speech2Text
{ "login": "xjdeng", "id": 17135596, "node_id": "MDQ6VXNlcjE3MTM1NTk2", "avatar_url": "https://avatars.githubusercontent.com/u/17135596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xjdeng", "html_url": "https://github.com/xjdeng", "followers_url": "https://api.github.com/users/xjdeng/followers", "following_url": "https://api.github.com/users/xjdeng/following{/other_user}", "gists_url": "https://api.github.com/users/xjdeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/xjdeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xjdeng/subscriptions", "organizations_url": "https://api.github.com/users/xjdeng/orgs", "repos_url": "https://api.github.com/users/xjdeng/repos", "events_url": "https://api.github.com/users/xjdeng/events{/privacy}", "received_events_url": "https://api.github.com/users/xjdeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Your speech loading code is incorrect; instead try the following:\r\n\r\n```python\r\nfrom IPython.display import Audio\r\n\r\nspeech, rate = librosa.load(filename, sr=16000)\r\nAudio(speech, rate=rate)\r\n```", "When I run this line\r\n\r\n`processor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\")`\r\n\r\nI am getting the following error: \"AttributeError: type object 'Speech2TextProcessor' has no attribute 'from_pretrained'\". Did this part was recently changed in the repository?\r\n\r\nEDIT: sorry, my mistake. The previous installation was causing trouble. After uninstalling everything and installing again it is working fine.", "As @elgeish said, the speech loading code was causing the issue. Glad to know that you resolved it!", "Success! Thanks\n\nOn Wed, Mar 10, 2021, 20:25 rodrigoheck ***@***.***> wrote:\n\n> When I run this line\n>\n> processor =\n> Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\")\n>\n> I am getting the following error: \"AttributeError: type object\n> 'Speech2TextProcessor' has no attribute 'from_pretrained'\". Did this part\n> was recently changed in the repository?\n>\n> โ€”\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/10631#issuecomment-796383135>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AECXP3AO6IAKWJYWUGB42BTTDAS2PANCNFSM4Y6OTVBA>\n> .\n>\n", "I am using the maestro dataset(audio files transformed to Pytorch tensors). \r\n**Code:**\r\nif __name__ == '__main__':\r\n METADATA = \"data/processed.csv\"\r\n AUDIO_DIR = \"data\"\r\n SAMPLES = 16000\r\n SR = 16000\r\n if torch.cuda.is_available():\r\n device = \"cuda\"\r\n else:\r\n device = \"cpu\"\r\n\r\n ds = LoadDataset(metadata_file=METADATA,\r\n audio_dir=AUDIO_DIR,\r\n sample_rate=SR,\r\n num_samples=SAMPLES,\r\n device=device\r\n )\r\n dataloader = DataLoader(ds)\r\n dataiter = iter(dataloader)\r\n data = next(dataiter)\r\n features = data\r\n SR = 16000\r\n sample_rate = SR\r\n processor = AutoProcessor.from_pretrained(\r\n \"MIT/ast-finetuned-audioset-10-10-0.4593\")\r\n model = ASTModel.from_pretrained(\"MIT/ast-finetuned-audioset-10-10-0.4593\")\r\n\r\n # audio file is decoded on the fly\r\n inputs = processor(features, sampling_rate=sample_rate, return_tensors=\"pt\")\r\n with torch.no_grad():\r\n outputs = model(**inputs)\r\n\r\n last_hidden_states = outputs.last_hidden_state\r\n print(last_hidden_state) \r\n**Error Message:** \r\nCould not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.\r\nSome weights of the model checkpoint at MIT/ast-finetuned-audioset-10-10-0.4593 were not used when initializing ASTModel: ['classifier.layernorm.bias', 'classifier.layernorm.weight', 'classifier.dense.weight', 'classifier.dense.bias']\r\n- This IS expected if you are initializing ASTModel from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing ASTModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nTraceback (most recent call last):\r\n File \"model.py\", line 40, in <module>\r\n inputs = processor(features, sampling_rate=sample_rate, return_tensors=\"pt\")\r\n File \"/home/yadgire/.local/lib/python3.7/site-packages/transformers/models/audio_spectrogram_transformer/feature_extraction_audio_spectrogram_transformer.py\", line 183, in __call__\r\n features = [self._extract_fbank_features(waveform, max_length=self.max_length) for waveform in raw_speech]\r\n File \"/home/yadgire/.local/lib/python3.7/site-packages/transformers/models/audio_spectrogram_transformer/feature_extraction_audio_spectrogram_transformer.py\", line 183, in <listcomp>\r\n features = [self._extract_fbank_features(waveform, max_length=self.max_length) for waveform in raw_speech]\r\n File \"/home/yadgire/.local/lib/python3.7/site-packages/transformers/models/audio_spectrogram_transformer/feature_extraction_audio_spectrogram_transformer.py\", line 105, in _extract_fbank_features\r\n frame_shift=10,\r\n File \"/home/yadgire/.local/lib/python3.7/site-packages/torchaudio/compliance/kaldi.py\", line 592, in fbank\r\n waveform, channel, sample_frequency, frame_shift, frame_length, round_to_power_of_two, preemphasis_coefficient\r\n File \"/home/yadgire/.local/lib/python3.7/site-packages/torchaudio/compliance/kaldi.py\", line 143, in _get_waveform_and_window_properties\r\n window_size, len(waveform)\r\nAssertionError: choose a window size 400 that is [2, 1]\r\n@xjdeng Can you please help me out with this?", "@yadgire7\r\n\r\nI no longer use this model for speech to text, use [Whisper](https://huggingface.co/models?other=whisper) instead.\r\n\r\nFor music genre classification, [try converting the audio into spectrograms and training an image classifier on the spectrograms.](https://towardsdatascience.com/audio-deep-learning-made-simple-sound-classification-step-by-step-cebc936bbe5)\r\n\r\nI think you might be able to build a multimodal model that takes both the spectrogram and the transcribed text and tries to classify the music using both inputs, at least I think you could with the Fastai library by defining a text block with an image block though I haven't tried it before.\r\n\r\n" ]
1,615
1,691
1,615
NONE
null
Hey @patil-suraj (and anyone who can help), Sorry, I'm still a beginner compared to the rest of the folks here so sorry if my question is a little basic. But I'm trying to build a pipeline to manually transcribe Youtube videos (that aren't transcribed correctly by Google) and I was considering using your [model ](https://huggingface.co/facebook/s2t-small-librispeech-asr)for it. Here's my unfinished code on [Google Colab](https://colab.research.google.com/drive/15SgSw1KmD-sxdf6Zd953fJIPSmisHd6M?usp=sharing); the last line throws an error: ``` !pip install git+https://github.com/huggingface/transformers !pip install youtube-dl path.py soundfile librosa sentencepiece torchaudio import youtube_dl from path import Path as Path import tempfile import textwrap import librosa import soundfile as sf import torch from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr") wrapper = textwrap.TextWrapper(width=70) mydir = tempfile.TemporaryDirectory() dirname = mydir.name + "/tmp.wav" !youtube-dl -o $dirname -ci -f 'bestvideo[ext=mp4]+bestaudio' -x --audio-format wav https://www.youtube.com/watch?v=d5yfUuHYWho filename = dirname + ".wav" speech, rate = sf.read(filename) speech = librosa.resample(speech.T, rate, 16000) features = processor(speech, sampling_rate=16000, padding=True, return_tensors="pt") ``` And here's the error produced: ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-8-8fc3e2d943e0> in <module>() ----> 1 features = processor(speech, sampling_rate=16000, padding=True, return_tensors="pt") 5 frames /usr/local/lib/python3.7/dist-packages/torchaudio/compliance/kaldi.py in _get_waveform_and_window_properties(waveform, channel, sample_frequency, frame_shift, frame_length, round_to_power_of_two, preemphasis_coefficient) 147 assert 2 <= window_size <= len( 148 waveform), ('choose a window size {} that is [2, {}]' --> 149 .format(window_size, len(waveform))) 150 assert 0 < window_shift, '`window_shift` must be greater than 0' 151 assert padded_window_size % 2 == 0, 'the padded `window_size` must be divisible by two.' \ AssertionError: choose a window size 400 that is [2, 2] ``` Can anyone point me in the right direction? Thanks.
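A hedged sketch of the corrected loading path suggested in the comments (use `librosa.load` with `sr=16000` instead of `sf.read` plus `resample`), continuing from the script above; the transcription call follows the model card's usage and `filename` is a placeholder for the downloaded WAV.

```python
import librosa
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

filename = "tmp.wav"  # placeholder for the WAV downloaded by the script above

# librosa.load resamples to 16 kHz and returns a mono float array, which the
# feature extractor can window; the sf.read + resample path above likely left
# a 2-channel array behind, matching the "window size 400 that is [2, 2]" error.
speech, rate = librosa.load(filename, sr=16000)

features = processor(speech, sampling_rate=16000, padding=True, return_tensors="pt")
generated_ids = model.generate(features["input_features"], attention_mask=features["attention_mask"])
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```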
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10631/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10630
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10630/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10630/comments
https://api.github.com/repos/huggingface/transformers/issues/10630/events
https://github.com/huggingface/transformers/issues/10630
827,858,299
MDU6SXNzdWU4Mjc4NTgyOTk=
10,630
I get different results everytime I run run_squad.py
{ "login": "nrjvarshney", "id": 19836137, "node_id": "MDQ6VXNlcjE5ODM2MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nrjvarshney", "html_url": "https://github.com/nrjvarshney", "followers_url": "https://api.github.com/users/nrjvarshney/followers", "following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}", "gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}", "starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions", "organizations_url": "https://api.github.com/users/nrjvarshney/orgs", "repos_url": "https://api.github.com/users/nrjvarshney/repos", "events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}", "received_events_url": "https://api.github.com/users/nrjvarshney/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
Is it possible to get deterministic results from the run_squad.py script? It calls the set_seed() method, but it still gives different results every time I run it. How can I get the same results across all runs?
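The script's `set_seed()` fixes the Python, NumPy and PyTorch seeds; on GPU, fully reproducible runs usually also need the cuDNN settings below. A hedged sketch, not part of run_squad.py:

```python
import random
import numpy as np
import torch

def make_deterministic(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning and non-deterministic kernels are a common source of
    # run-to-run differences even when all seeds are fixed.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

make_deterministic(42)
```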
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10630/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10629
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10629/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10629/comments
https://api.github.com/repos/huggingface/transformers/issues/10629/events
https://github.com/huggingface/transformers/issues/10629
827,535,935
MDU6SXNzdWU4Mjc1MzU5MzU=
10,629
Using `label` in Trainer leads to TypeError
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @sgugger ", "Yes, the default data collator always change `label` to `labels` because Hugging Face models expect that argument while Hugging Face datasets usually have `label`. You can work around this by using the default data collator of PyTorch and pass it to the `Trainer`, but you should make your model work like the one in Transformers to avoid any other issues we didn't think of (so use a `labels` argument and always return tuples).\r\n\r\nI'll see what I can do to avoid this specific bug in the future.", "Thanks a lot @sgugger, @LysandreJik. Should I close this issue?\r\n\r\nEDIT:\r\n\r\nI think that this could be mentioned in the docs where custom Trainer/TrainingArguments are discussed. What do you think?", "Like I said, will try to solve the bug in itself. My recommendation was more in general to avoid any other bugs :-) ", "Thanks again @sgugger :) " ]
1,615
1,615
1,615
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.3.3 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.0+cu101 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: Not explicitly. - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information My dataset is defined as follows: ```python """Implements MNIST Dataset""" from torch.utils.data import Dataset from torchvision import datasets, transforms from torchvision.transforms import Grayscale, ToTensor, Normalize class Mnist(Dataset): def __init__(self, config): self.config = config transformations = [Grayscale(num_output_channels=1),ToTensor(),Normalize(mean=[0.0],std=[1.0])] self.transform = ( transforms.Compose(transformations) ) self.dataset = datasets.MNIST( config.load_dataset_args.path, download=True, train=self.config.split == "train", transform=self.transform, ) def __len__(self): return len(self.dataset) def __getitem__(self, example_idx): # essential to return as dict, hence the roundabout way of loading the dataset img, label = self.dataset[example_idx] return {"image": img, "label": label} ``` Model I am using - a custom CNN, defined as follows: ```python """Implementation of a custom CNN with random weights.""" from torch.nn import ( BatchNorm2d, Conv2d, Linear, MaxPool2d, Module, ReLU, Sequential, CrossEntropyLoss, ) class SimpleCnn(Module): def __init__(self): super(SimpleCnn, self).__init__() self.cnn_layers = Sequential( Conv2d(1, 32, kernel_size=3, stride=1, padding=1), BatchNorm2d(32), ReLU(), MaxPool2d(kernel_size=2, stride=2), Conv2d(32, 32, kernel_size=3, stride=1, padding=1), BatchNorm2d(32), ReLU(), MaxPool2d(kernel_size=2, stride=2), Conv2d(32, 32, kernel_size=3, stride=1, padding=1), BatchNorm2d(32), ReLU(), MaxPool2d(kernel_size=2, stride=2), ) self.linear_layers = Linear(32 * 3 * 3, 10) self.loss_fn = CrossEntropyLoss() def forward(self, image, label=None): out = self.cnn_layers(image) out = out.view(out.size(0), -1) out = self.linear_layers(out) if label is not None: loss = self.loss_fn(out, label) return loss, out return out ``` The problem arises when using: Trainer with a custom `label_names` as `['label']`. I provide `label_names` as `['label']` in `TrainingArguments`. The following error occurs on `trainer.train()`: ```python Traceback (most recent call last): File "hf_train.py", line 97, in <module> trainer.train() File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 943, in train tr_loss += self.training_step(model, inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1307, in training_step loss = self.compute_loss(model, inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1337, in compute_loss outputs = model(**inputs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'labels' ``` I tried printing batch keys, using `torch.utils.data.DataLoader` inside, and after the function call to `get_train_dataloader` in `trainer.py`: ``` dict_keys(['image', 'label']) #Inside dict_keys(['labels', 'image']) #Immediately after call ``` I don't understand how it gets converted to `labels` on its own. ## To reproduce Steps to reproduce the behavior: 1. Load any dataset with one output key as `['label']`. 2. Provide `['label']` as the label_names to `TrainingArguments` 3. 
Run `trainer.train()`. One can also try using `load_dataset('mnist')` directly from the `datasets` library. This error will get thrown. This is not expected. Strangely enough, changing every `'label'` to `'class_label'` or `'labels'` works perfectly. I don't know why this would happen.
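A hedged sketch of the workaround suggested in the comments above: pass PyTorch's default collate function as the `data_collator` so the Trainer's default collator does not rename `label` to `labels` (the alternative, also suggested there, is to make the custom model accept `labels` and return tuples). The model and dataset variables are assumed to be the `SimpleCnn` and `Mnist` objects from the issue.

```python
from torch.utils.data.dataloader import default_collate
from transformers import Trainer, TrainingArguments

# model and train_dataset are assumed to be the SimpleCnn and Mnist instances above.
training_args = TrainingArguments(output_dir="./results", label_names=["label"])

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    # default_collate batches the dicts without renaming keys, unlike the
    # Trainer's default_data_collator, which maps "label" -> "labels".
    data_collator=default_collate,
)
trainer.train()
```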
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10629/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10628
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10628/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10628/comments
https://api.github.com/repos/huggingface/transformers/issues/10628/events
https://github.com/huggingface/transformers/issues/10628
827,443,113
MDU6SXNzdWU4Mjc0NDMxMTM=
10,628
expanduser path in Trainer
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sounds reasonable. Would you like to make a PR with this change?", "Can do.\r\nI should expand the path in TrainingArguments right? for logging_dir also?", "Yes, `output_dir` and `logging_dir`, preferable in the postinit of `TrainingArguments` so it's done as early as possible." ]
1,615
1,615
1,615
CONTRIBUTOR
null
the `output_dir` passed to TrainingArguments is not expanded (the behaviour is probably the same for `logging_dir`)

### Who can help

Library:
- trainer: @sgugger

## To reproduce

Directly using os.makedirs but this is what happens in Trainer

```py
In [7]: !mkdir ~/foo
In [8]: !cd ~/foo
/mnt/beegfs/home/lerner/foo
In [10]: os.makedirs("~/bar")
In [14]: !realpath "~/bar"
/mnt/beegfs/home/lerner/foo/~/bar
```

## To fix

Call os.path.expanduser before making dir

```py
In [10]: os.makedirs(os.path.expanduser("~/bar"))
In [18]: cd /mnt/beegfs/home/lerner
In [21]: !realpath bar
/mnt/beegfs/home/lerner/bar
```
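A hedged, illustrative sketch of the fix direction agreed on in the comments (not the actual TrainingArguments code): expand `~` as early as possible, e.g. in the dataclass `__post_init__`, so every later `os.makedirs` call sees the real path.

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExampleArguments:
    output_dir: str
    logging_dir: Optional[str] = None

    def __post_init__(self):
        # Expanding "~" right away means every later os.makedirs call sees the
        # real home path instead of creating a literal "./~" directory.
        self.output_dir = os.path.expanduser(self.output_dir)
        if self.logging_dir is not None:
            self.logging_dir = os.path.expanduser(self.logging_dir)

args = ExampleArguments(output_dir="~/bar")
os.makedirs(args.output_dir, exist_ok=True)
```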
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10628/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10627
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10627/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10627/comments
https://api.github.com/repos/huggingface/transformers/issues/10627/events
https://github.com/huggingface/transformers/issues/10627
827,440,372
MDU6SXNzdWU4Mjc0NDAzNzI=
10,627
considering `pad_to_multiple_of` for run_mlm.py
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, we could use that when the line by line option is set (otherwise there is just no padding). Would you like to make a PR with this?" ]
1,615
1,617
1,617
NONE
null
## Environment info

<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->

- `transformers` version: 4.3.3
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -

### Who can help

@sgugger, @patil-suraj

## Information

I have seen in huggingface scripts such as run_seq2seq.py that, in the fp16 case, they pad to a multiple of 8 as shown below, perhaps for efficiency:

```
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    label_pad_token_id=label_pad_token_id,
    pad_to_multiple_of=8 if training_args.fp16 else None,
)
```

In run_mlm.py, this padding condition for the fp16 case is not applied. Could this be a bug, and could run_mlm.py's performance improve if it were set? I would appreciate it if someone could have a look. Thanks.
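A hedged sketch of what the suggested change could look like in run_mlm.py's line-by-line branch, following that script's variable names and assuming `DataCollatorForLanguageModeling` accepts `pad_to_multiple_of` the same way `DataCollatorForSeq2Seq` does:

```python
from transformers import DataCollatorForLanguageModeling

# Sketch: tokenizer, data_args and training_args follow run_mlm.py's naming.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm_probability=data_args.mlm_probability,
    # Tensor cores on recent GPUs are used most efficiently when the padded
    # sequence length is a multiple of 8, hence this setting under fp16.
    pad_to_multiple_of=8 if training_args.fp16 else None,
)
```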
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10627/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10626
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10626/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10626/comments
https://api.github.com/repos/huggingface/transformers/issues/10626/events
https://github.com/huggingface/transformers/issues/10626
827,371,372
MDU6SXNzdWU4MjczNzEzNzI=
10,626
Average checkpoints
{ "login": "mudong0419", "id": 24379054, "node_id": "MDQ6VXNlcjI0Mzc5MDU0", "avatar_url": "https://avatars.githubusercontent.com/u/24379054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mudong0419", "html_url": "https://github.com/mudong0419", "followers_url": "https://api.github.com/users/mudong0419/followers", "following_url": "https://api.github.com/users/mudong0419/following{/other_user}", "gists_url": "https://api.github.com/users/mudong0419/gists{/gist_id}", "starred_url": "https://api.github.com/users/mudong0419/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mudong0419/subscriptions", "organizations_url": "https://api.github.com/users/mudong0419/orgs", "repos_url": "https://api.github.com/users/mudong0419/repos", "events_url": "https://api.github.com/users/mudong0419/events{/privacy}", "received_events_url": "https://api.github.com/users/mudong0419/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,615
1,615
1,615
NONE
null
Is it possible to average the weights of several checkpoints? Something like https://github.com/pytorch/fairseq/blob/master/scripts/average_checkpoints.py
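A hedged sketch of checkpoint averaging with plain PyTorch, along the lines of the fairseq script linked above; the directory names, weights file name and model class are placeholders:

```python
import torch
from transformers import AutoModelForSequenceClassification

checkpoint_dirs = ["results/checkpoint-500", "results/checkpoint-1000"]  # placeholders

state_dicts = [torch.load(f"{d}/pytorch_model.bin", map_location="cpu") for d in checkpoint_dirs]

averaged = {}
for key in state_dicts[0]:
    if state_dicts[0][key].is_floating_point():
        # Average every floating-point parameter across the checkpoints.
        averaged[key] = torch.stack([sd[key] for sd in state_dicts], dim=0).mean(dim=0)
    else:
        # Integer buffers (e.g. position ids) are copied from the first checkpoint.
        averaged[key] = state_dicts[0][key]

model = AutoModelForSequenceClassification.from_pretrained(checkpoint_dirs[0], state_dict=averaged)
```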
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10626/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10625
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10625/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10625/comments
https://api.github.com/repos/huggingface/transformers/issues/10625/events
https://github.com/huggingface/transformers/issues/10625
827,318,386
MDU6SXNzdWU4MjczMTgzODY=
10,625
Model "deberta-v2--xxlarge-mnli" doesn't work!!!
{ "login": "ngoquanghuy99", "id": 36761076, "node_id": "MDQ6VXNlcjM2NzYxMDc2", "avatar_url": "https://avatars.githubusercontent.com/u/36761076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ngoquanghuy99", "html_url": "https://github.com/ngoquanghuy99", "followers_url": "https://api.github.com/users/ngoquanghuy99/followers", "following_url": "https://api.github.com/users/ngoquanghuy99/following{/other_user}", "gists_url": "https://api.github.com/users/ngoquanghuy99/gists{/gist_id}", "starred_url": "https://api.github.com/users/ngoquanghuy99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ngoquanghuy99/subscriptions", "organizations_url": "https://api.github.com/users/ngoquanghuy99/orgs", "repos_url": "https://api.github.com/users/ngoquanghuy99/repos", "events_url": "https://api.github.com/users/ngoquanghuy99/events{/privacy}", "received_events_url": "https://api.github.com/users/ngoquanghuy99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No, DeBERTa-v2 is not available in v4.3.3, it's only available from source as of now. Version v4.4.0 should be released end of this week or early next week, and will have DeBERTa-v2.", "Yes, exactly what i think! Thanks Lysandre for confirming this." ]
1,615
1,615
1,615
CONTRIBUTOR
null
Whenever I try to load the tokenizer with ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-v2-xxlarge-mnli') ``` it raises this error: ``` config_class = CONFIG_MAPPING[config_dict["model_type"]] KeyError: 'deberta-v2' ``` Is this model not available in Transformers 4.3.3 (the latest release)?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10625/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10624
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10624/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10624/comments
https://api.github.com/repos/huggingface/transformers/issues/10624/events
https://github.com/huggingface/transformers/pull/10624
827,065,189
MDExOlB1bGxSZXF1ZXN0NTg5MDExODY1
10,624
Copy tokenizer files in each of their repo
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Love it!\r\n\r\nMaybe a good practice to link to a sample of the related commits on hf.co: for instance here https://huggingface.co/facebook/bart-base/commit/c2469fb7e666a5c5629a161f17c9ef23c85217f7", "I think I did around 50 of them in various repos to move all the tokenizers files, so a bit hard to keep track of all of them.", "Yep just link one, or a small sample.\r\n\r\nMakes it easier to see what this PR entails on hf-hub side" ]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR cleans the maps in the tokenizer files to make sure each checkpoint has the proper tokenization files. This will allow us to remove custom code that mapped some checkpoints to special files (like BART using RoBERTa vocab files) and take full advantage of the versioning system for those checkpoints. All changed checkpoints have been properly copied to the corresponding model repos in parallel. For instance, to accommodate the move for the fast BART tokenizers, the following commits have been made on the model hub: - in [facebook/bart-base](https://huggingface.co/facebook/bart-base/commit/c2469fb7e666a5c5629a161f17c9ef23c85217f7) - in [facebook/bart-large](https://huggingface.co/facebook/bart-large/commit/22fa33834dccc11df99c4fc5fcc96c67f806dfdb) - in [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli/commit/6a35c499ad1087bad8d9c348a05b1fa10c5ad47d) - in [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn/commit/18614750f248f300641757e8e44e6afce801d664) - in [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum/commit/96ea79a741cd376cdc8a740b225330773da151f0) - in [yjernite/bart_eli5](https://huggingface.co/yjernite/bart_eli5/commit/38797dd2ef06f5542c6f7db853518703f6b3da21) In the PR I've also uniformized the way the maps are structured across models, to make it easier to alter (and ultimately remove) them in the future via automatic scripts.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10624/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10624", "html_url": "https://github.com/huggingface/transformers/pull/10624", "diff_url": "https://github.com/huggingface/transformers/pull/10624.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10624.patch", "merged_at": 1615393583000 }
https://api.github.com/repos/huggingface/transformers/issues/10623
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10623/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10623/comments
https://api.github.com/repos/huggingface/transformers/issues/10623/events
https://github.com/huggingface/transformers/issues/10623
827,011,734
MDU6SXNzdWU4MjcwMTE3MzQ=
10,623
Invalid pytorch_model.bin for TAPAS-large
{ "login": "saichandrapandraju", "id": 41769919, "node_id": "MDQ6VXNlcjQxNzY5OTE5", "avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saichandrapandraju", "html_url": "https://github.com/saichandrapandraju", "followers_url": "https://api.github.com/users/saichandrapandraju/followers", "following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}", "gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}", "starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions", "organizations_url": "https://api.github.com/users/saichandrapandraju/orgs", "repos_url": "https://api.github.com/users/saichandrapandraju/repos", "events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}", "received_events_url": "https://api.github.com/users/saichandrapandraju/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, not sure why this happens, cc @julien-c.\r\n\r\nA workaround is to load the model using `model = TapasModel.from_pretrained(\"google/tapas-base\")` and then use `model.save_pretrained(\"./\")` to save the `config.json` and `pytorch_model.bin` file to a local directory. ", "I don't remember how those models were uploaded so not sure why this is happening.\r\n\r\ncc'ing @Pierrci for visibility\r\n\r\nIn the meantime you can just rename the file to .bin", "@saichandrapandraju the file on tapas-large is a bin file, but since pytorch 1.6.0, bin files are now zip-based. You can check the documentation of [torch.save](https://pytorch.org/docs/stable/generated/torch.save.html) which states:\r\n\r\n> The 1.6 release of PyTorch switched torch.save to use a new zipfile-based file format. torch.load still retains the ability to load files in the old format. If for any reason you want torch.save to use the old format, pass the kwarg _use_new_zipfile_serialization=False.\r\n\r\nAre you using a torch version inferior to 1.6.0 to try to load the models?", "As pointed out by @julien-c, it is actually downloaded as a zip file, which seems to be the case for several models (my guess is that it does that when the file is zip-based, like all models saved with torch >1.6).\r\n\r\nDownloading through git doesn't have that issue.", "Ok will close this as:\r\n- we found the root cause but it's outside our control (`lfs` adds an auto content-type)\r\n- there are several workarounds like `git clone`ing the repo or just renaming the file (`from_pretrained` also works as usual)\r\n\r\nThanks for investigating @Pierrci and @LysandreJik ๐Ÿฅ‡ " ]
1,615
1,615
1,615
NONE
null
Hi, I was downloading the `google/tapas-large` binaries from [HF models](https://huggingface.co/google/tapas-large/tree/main) and, for pytorch_model.bin, a zip file was downloaded instead of a `bin` file like other models. The folder structure is: `archive/data/.*File`, `archive/data.pkl`, `archive/version.File`. The same happens for tapas-base. These cannot be used for loading a model (`TapasForQuestionAnswering.from_pretrained('path/to/binaries directory')`). Please suggest how to proceed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10623/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10622
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10622/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10622/comments
https://api.github.com/repos/huggingface/transformers/issues/10622/events
https://github.com/huggingface/transformers/issues/10622
827,003,569
MDU6SXNzdWU4MjcwMDM1Njk=
10,622
wav2vec2: adding single-char tokens to tokenizer causes tokenization mistakes
{ "login": "elgeish", "id": 6879673, "node_id": "MDQ6VXNlcjY4Nzk2NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elgeish", "html_url": "https://github.com/elgeish", "followers_url": "https://api.github.com/users/elgeish/followers", "following_url": "https://api.github.com/users/elgeish/following{/other_user}", "gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}", "starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elgeish/subscriptions", "organizations_url": "https://api.github.com/users/elgeish/orgs", "repos_url": "https://api.github.com/users/elgeish/repos", "events_url": "https://api.github.com/users/elgeish/events{/privacy}", "received_events_url": "https://api.github.com/users/elgeish/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "My workaround right now is to keep a reference to the original `tokenizer.unique_no_split_tokens` before adding tokens then restoring it afterwards:\r\n\r\n```python\r\nfrom transformers import Wav2Vec2Processor\r\n\r\ntokenizer = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base').tokenizer\r\nunique_no_split_tokens = tokenizer.unique_no_split_tokens\r\ntokenizer.add_tokens('x')\r\ntokenizer.unique_no_split_tokens = unique_no_split_tokens\r\ntoken_ids = tokenizer('C x A').input_ids\r\ndecoded = tokenizer.decode(token_ids)\r\nprint(decoded, token_ids)\r\n# C x A [19, 4, 32, 4, 7]\r\n```", "Hey @elgeish,\r\n\r\nSorry for replying that late! Yes, you are absolutely right here :-)\r\nI think we should overwrite the `add_tokens(self, ...)` function in \r\n`src/transformers/models/wav2vec2/tokenization_wav2vec2.py` with the \"hack\" just as you did:\r\n\r\n```python\r\ndef _add_tokens(self, new_tokens: Union[List[str], List[AddedToken]], special_tokens: bool = False) -> int:\r\n # copy past the function from `src/transformers/tokenization_utils.py` \r\n # + add the \"hack\": \r\n unique_no_split_tokens = tokenizer.unique_no_split_tokens\r\n tokenizer.unique_no_split_tokens = unique_no_split_tokens\r\n```\r\n\r\nIf you want and have some time, it would be amazing if you could open a PR :-)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hey @patrickvonplaten,\r\nShall I open a PR for this issue.", "Hey @Muktan,\r\n\r\nyes this would be great :-)" ]
1,615
1,620
1,620
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.4.0.dev0 - Platform: Linux-5.8.0-44-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: N/A - Using distributed or parallel set-up in script?: N/A ### Who can help @patrickvonplaten and @LysandreJik Issue is probably related to interactions of the following: https://github.com/huggingface/transformers/blob/9a8c168f56fe3c0e21d554a577ac03beb004ef89/src/transformers/tokenization_utils.py#L213 https://github.com/huggingface/transformers/blob/11fdde02719dbd20651c9f43cc6f54959fc6ede6/src/transformers/tokenization_utils.py#L352 https://github.com/huggingface/transformers/blob/cb38ffcc5e0ae2fac653342ac36dc75c15ea178f/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L184 This is a corner case: `add_tokens` adds new tokens to `self.unique_no_split_tokens` -- causing `tokenize()` to skip calling `Wav2Vec2CTCTokenizer._tokenize()` This is probably not the case with most tokenizers since their vocab includes most, if not all, commonly used single-characters tokens without including them in `self.unique_no_split_tokens`. I faced this while debugging my code for https://github.com/huggingface/transformers/pull/10581 to add support for Buckwalter Arabic transliteration. The issue is not limited to adding single-char tokens but rather when words (space-separated) start or end with a newly added token. ## Information Model I am using (Bert, XLNet ...): wav2vec2 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: adding tokens to ASR vocab The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: training an ASR with extended vocab ## To reproduce Steps to reproduce the behavior: ```python from transformers import Wav2Vec2Processor tokenizer = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base').tokenizer tokenizer.add_tokens('x') token_ids = tokenizer('C x A').input_ids decoded = tokenizer.decode(token_ids) print(decoded, token_ids) # CxA [19, 32, 7] ``` ## Expected behavior Should have printed `C x A [19, 4, 32, 4, 7]`
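A small helper wrapping the workaround shown in the comments above; this is a sketch, not the eventual library fix, and assumes a tokenizer whose `unique_no_split_tokens` attribute is a plain list:

```python
def add_tokens_keeping_no_split(tokenizer, tokens):
    """Add tokens without registering them in unique_no_split_tokens, so that
    single-character additions still go through the normal _tokenize() path."""
    saved = list(tokenizer.unique_no_split_tokens)  # copy the current list
    num_added = tokenizer.add_tokens(tokens)        # may extend unique_no_split_tokens
    tokenizer.unique_no_split_tokens = saved        # restore the original list
    return num_added
```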
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10622/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10621
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10621/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10621/comments
https://api.github.com/repos/huggingface/transformers/issues/10621/events
https://github.com/huggingface/transformers/pull/10621
826,890,988
MDExOlB1bGxSZXF1ZXN0NTg4ODU1ODE3
10,621
Fixes an issue in `text-classification` where MNLI eval/test datasets are not being preprocessed.
{ "login": "allenwang28", "id": 9057208, "node_id": "MDQ6VXNlcjkwNTcyMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9057208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/allenwang28", "html_url": "https://github.com/allenwang28", "followers_url": "https://api.github.com/users/allenwang28/followers", "following_url": "https://api.github.com/users/allenwang28/following{/other_user}", "gists_url": "https://api.github.com/users/allenwang28/gists{/gist_id}", "starred_url": "https://api.github.com/users/allenwang28/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/allenwang28/subscriptions", "organizations_url": "https://api.github.com/users/allenwang28/orgs", "repos_url": "https://api.github.com/users/allenwang28/repos", "events_url": "https://api.github.com/users/allenwang28/events{/privacy}", "received_events_url": "https://api.github.com/users/allenwang28/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? In https://github.com/huggingface/transformers/commit/dfd16af8322788e6dd58e8396e0d6f2f5312bf99 for `run_glue.py`, `{train|eval|test}_dataset` was split out and preprocessed individually. However, this misses `datasets["{validation|test}_mismatched"]` which is appended to the `{eval|test}_dataset` only when MNLI is used. When running evaluation on MNLI, that means we eventually hit an un-preprocessed dataset which leads to a stack trace like this: ``` Traceback (most recent call last): File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn fn(gindex, *args) File "/transformers/examples/text-classification/run_glue.py", line 532, in _mp_fn main() File "/transformers/examples/text-classification/run_glue.py", line 493, in main metrics = trainer.evaluate(eval_dataset=eval_dataset) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1657, in evaluate metric_key_prefix=metric_key_prefix, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1788, in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1899, in prediction_step loss, outputs = self.compute_loss(model, inputs, return_outputs=True) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1458, in compute_loss outputs = model(**inputs) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl return forward_call(*input, **kwargs) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 625, in forward return_dict=return_dict, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl return forward_call(*input, **kwargs) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 471, in forward raise ValueError("You have to specify either input_ids or inputs_embeds") ValueError: You have to specify either input_ids or inputs_embeds ``` This commit resolves this by moving the `dataset.map(preprocess...)` to the beginning. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. 
--> <!-- Remove if not applicable --> Fixes # 10620 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10621/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10621", "html_url": "https://github.com/huggingface/transformers/pull/10621", "diff_url": "https://github.com/huggingface/transformers/pull/10621.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10621.patch", "merged_at": 1615346025000 }
https://api.github.com/repos/huggingface/transformers/issues/10620
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10620/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10620/comments
https://api.github.com/repos/huggingface/transformers/issues/10620/events
https://github.com/huggingface/transformers/issues/10620
826,890,321
MDU6SXNzdWU4MjY4OTAzMjE=
10,620
MNLI eval/test dataset is not being preprocessed in `run_glue.py`
{ "login": "allenwang28", "id": 9057208, "node_id": "MDQ6VXNlcjkwNTcyMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9057208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/allenwang28", "html_url": "https://github.com/allenwang28", "followers_url": "https://api.github.com/users/allenwang28/followers", "following_url": "https://api.github.com/users/allenwang28/following{/other_user}", "gists_url": "https://api.github.com/users/allenwang28/gists{/gist_id}", "starred_url": "https://api.github.com/users/allenwang28/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/allenwang28/subscriptions", "organizations_url": "https://api.github.com/users/allenwang28/orgs", "repos_url": "https://api.github.com/users/allenwang28/repos", "events_url": "https://api.github.com/users/allenwang28/events{/privacy}", "received_events_url": "https://api.github.com/users/allenwang28/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Fixed by #10621 \r\nThanks for flagging and fixing :-)" ]
1,615
1,615
1,615
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.0.dev0 - Platform: Linux-4.9.0-14-amd64-x86_64-with-debian-9.13 - Python version: 3.6.10 - PyTorch version (GPU?): 1.8.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no, using TPU - Using distributed or parallel set-up in script?: distributed ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> N/A, I have a fix upcoming ๐Ÿ‘ ## Information Model I am using (Bert, XLNet ...): Any model within `examples/text-classification/run_glue.py` that uses MNLI The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (MNLI) * [ ] my own task or dataset: (give details below) Essentially, the issue is that in https://github.com/huggingface/transformers/commit/dfd16af8322788e6dd58e8396e0d6f2f5312bf99 for `run_glue.py`, `{train|eval|test}_dataset` was split out and preprocessed individually. However, this misses `datasets["{validation|test}_mismatched"]` which is appended to the `{eval|test}_dataset` only when MNLI is used. ## To reproduce Steps to reproduce the behavior: 1. Run the `run_glue.py` example on an MNLI dataset and include eval. 
The full command I'm using on a v2-8 TPU is: ``` python examples/xla_spawn.py --num_cores 8 examples/text-classification/run_glue.py --logging_dir=./tensorboard-metrics --task_name MNLI --cache_dir ./cache_dir --do_eval --max_seq_length 128 --learning_rate 3e-5 --output_dir MNLI --logging_steps 30 --save_steps 3000 --tpu_metrics_debug --model_name_or_path bert-base-cased --per_device_eval_batch_size 64 --overwrite_output_dir ``` This results in: ``` Traceback (most recent call last): File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn fn(gindex, *args) File "/transformers/examples/text-classification/run_glue.py", line 532, in _mp_fn main() File "/transformers/examples/text-classification/run_glue.py", line 493, in main metrics = trainer.evaluate(eval_dataset=eval_dataset) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1657, in evaluate metric_key_prefix=metric_key_prefix, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1788, in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1899, in prediction_step loss, outputs = self.compute_loss(model, inputs, return_outputs=True) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1458, in compute_loss outputs = model(**inputs) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl return forward_call(*input, **kwargs) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 625, in forward return_dict=return_dict, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl return forward_call(*input, **kwargs) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 471, in forward raise ValueError("You have to specify either input_ids or inputs_embeds") ValueError: You have to specify either input_ids or inputs_embeds ``` ## Expected behavior Dataset should be preprocessed for the entirety of the dataset. Fix: https://github.com/huggingface/transformers/pull/10621
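A simplified sketch of the shape of the fix (names follow run_glue.py, but the actual change in PR #10621 may differ in detail): preprocess the whole `DatasetDict` before splitting, so that MNLI's extra mismatched splits are tokenized as well.

```python
# Sketch: tokenize every split up front instead of only train/validation/test.
datasets = datasets.map(
    preprocess_function,
    batched=True,
    load_from_cache_file=not data_args.overwrite_cache,
)
train_dataset = datasets["train"]
eval_dataset = datasets["validation_matched" if data_args.task_name == "mnli" else "validation"]
```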
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10620/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10619
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10619/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10619/comments
https://api.github.com/repos/huggingface/transformers/issues/10619/events
https://github.com/huggingface/transformers/issues/10619
826,887,608
MDU6SXNzdWU4MjY4ODc2MDg=
10,619
wav2vec2: `convert_tokens_to_string` contracts legitimately repeated characters
{ "login": "elgeish", "id": 6879673, "node_id": "MDQ6VXNlcjY4Nzk2NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elgeish", "html_url": "https://github.com/elgeish", "followers_url": "https://api.github.com/users/elgeish/followers", "following_url": "https://api.github.com/users/elgeish/following{/other_user}", "gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}", "starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elgeish/subscriptions", "organizations_url": "https://api.github.com/users/elgeish/orgs", "repos_url": "https://api.github.com/users/elgeish/repos", "events_url": "https://api.github.com/users/elgeish/events{/privacy}", "received_events_url": "https://api.github.com/users/elgeish/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you try this?\r\n\r\n```python\r\nfrom transformers import Wav2Vec2Processor\r\n\r\ntokenizer = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base').tokenizer\r\ntokenizer.decode(tokenizer('CARRY').input_ids, group_tokens=False)\r\n# CARRY\r\n```\r\n\r\nBecause we need to decode the predicted tokens with CTC, `\"RR\"` is decoded to `\"R\"` by default. See this blog post for more information: https://distill.pub/2017/ctc/", "Yeah looks good! Thanks!", "By the way, I totally get why it's needed for CTC, I mistook it for the tokenizer used to decode the final results but noticed it wasn't the case. The final results work as expected. Sorry for the false alarm!" ]
1,615
1,615
1,615
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.4.0.dev0 - Platform: Linux-5.8.0-44-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: N/A - Using distributed or parallel set-up in script?: N/A ### Who can help @patrickvonplaten - issue is most probably due to https://github.com/huggingface/transformers/blob/cb38ffcc5e0ae2fac653342ac36dc75c15ea178f/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L203 ## Information Model I am using (Bert, XLNet ...): wav2vec2 The problem arises when using: * [x] the official example scripts: run_asr.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: wav2vec2 * [ ] my own task or dataset: (give details below) ## To reproduce ```python from transformers import Wav2Vec2Processor tokenizer = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base').tokenizer tokenizer.decode(tokenizer('CARRY').input_ids) # CARY ``` Decoder should have returned `'CARRY'` instead.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10619/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10618
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10618/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10618/comments
https://api.github.com/repos/huggingface/transformers/issues/10618/events
https://github.com/huggingface/transformers/issues/10618
826,764,486
MDU6SXNzdWU4MjY3NjQ0ODY=
10,618
Run_qa crashes because of parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
{ "login": "spacemanidol", "id": 3886120, "node_id": "MDQ6VXNlcjM4ODYxMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/3886120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spacemanidol", "html_url": "https://github.com/spacemanidol", "followers_url": "https://api.github.com/users/spacemanidol/followers", "following_url": "https://api.github.com/users/spacemanidol/following{/other_user}", "gists_url": "https://api.github.com/users/spacemanidol/gists{/gist_id}", "starred_url": "https://api.github.com/users/spacemanidol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spacemanidol/subscriptions", "organizations_url": "https://api.github.com/users/spacemanidol/orgs", "repos_url": "https://api.github.com/users/spacemanidol/repos", "events_url": "https://api.github.com/users/spacemanidol/events{/privacy}", "received_events_url": "https://api.github.com/users/spacemanidol/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is weird and linked to your environment somehow. \r\n@stas00 Was this the error you encountered when `dataclasses` is installed in Python 3.7 or was it a different one?", "no, that was not that error. I tested `run_qa.py` w/ dataclasses on py38 and it didn't fail.\r\n\r\nthe datasets error was: `AttributeError: module 'typing' has no attribute '_ClassVar'`\r\n\r\nhttps://github.com/huggingface/transformers/issues/8638", "I just tried this on 2 new servers with a fresh conda environment and reproduced behavior. \r\nSteps.\r\n```bash\r\nconda create -n test python=3.8\r\nconda activate test\r\npip install transformers datasets torch\r\npython run_qa.py --model_name_or_path bert-base-uncased --dataset_name squad --do_train --per_device_train_batch_size 8 --learning_rate 3e-5 --max_seq_length 384 --doc_stride 128 --output_dir bert-base-uncased-qa/ --overwrite_output_dir --cache_dir cache --preprocessing_num_workers 4 --seed 42 --num_train_epochs 1\r\n```\r\n", "I have also reproed with venv and regular environment on multiple machines", "The suggested commands work fine on my side, so can't reproduce the issue. ", "I have pushed a fix (on master by mistake but it's pretty harmless) a tentative fix to remove the line that caused you problem and replace it by a regex. Let me know if it fixes your issue or not (I can't confirm myself since I can't reproduce).", "FWIW, I followed your new conda env steps and couldn't reproduce the problem.\r\n\r\n@spacemanidol, fyi I edited your comment to fix the conda create line as it had the commands reversed.", "Can confirm this works. " ]
1,615
1,615
1,615
NONE
null
## Environment info - `transformers` version: 4.3.3 - Platform: linux - Python version:3.7, 3.8, 3.9 reproed across all three - PyTorch version (GPU?): 1.7, tried 1.8 with same behavior - Tensorflow version (GPU?):N/A - Using GPU in script?: yes - Using distributed or parallel set-up in script?: Yes 2 gpu ### Who can help @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): bert-base-uncased The problem arises when using: * [ X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ X] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) SQUAD 1.0 ## To reproduce Steps to reproduce the behavior: 1. Install clean transformers environment 2. run the run_qa.py script with instructions as specified 3. crash If you go ahead and create a new environment and install the most recent version of the transformer and try to run the run_qa.py script(SQUAD) it crashes because of a parser issue. python run_qa.py --model_name_or_path bert-base-uncased --dataset_name squad --do_train --per_device_train_batch_size 8 --learning_rate 3e-5 --max_seq_length 384 --doc_stride 128 --output_dir output --overwrite_output_dir --cache_dir cache --preprocessing_num_workers 4 --seed 42 --num_train_epochs 1 Traceback (most recent call last): File "run_qa.py", line 1095, in <module> main() File "run_qa.py", line 902, in main parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) File "/home/spacemanidol/miniconda3/envs/sparseml/lib/python3.7/site-packages/transformers/hf_argparser.py", line 52, in __init__ self._add_dataclass_arguments(dtype) File "/home/spacemanidol/miniconda3/envs/sparseml/lib/python3.7/site-packages/transformers/hf_argparser.py", line 93, in _add_dataclass_arguments elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List): File "/home/spacemanidol/miniconda3/envs/sparseml/lib/python3.7/typing.py", line 721, in __subclasscheck__ return issubclass(cls, self.__origin__) TypeError: issubclass() arg 1 must be a clas ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Run and produce a BERT-QA model
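A hypothetical sketch of the kind of regex-based check mentioned in the comments as the fix; the helper name and exact pattern are illustrative, not the actual code merged into `hf_argparser.py`:

```python
import re

def is_list_annotation(field_type) -> bool:
    # Detect typing.List[...] from its string form instead of poking at
    # field_type.__origin__, which breaks issubclass() on some Python 3.7 setups.
    return re.match(r"^typing\.List\[.*\]$", str(field_type)) is not None
```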
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10618/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10617
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10617/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10617/comments
https://api.github.com/repos/huggingface/transformers/issues/10617/events
https://github.com/huggingface/transformers/issues/10617
826,723,391
MDU6SXNzdWU4MjY3MjMzOTE=
10,617
Request: Ignore Dataset transforms when iterating to the most recent checkpoint when resuming training
{ "login": "jncasey", "id": 31020859, "node_id": "MDQ6VXNlcjMxMDIwODU5", "avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jncasey", "html_url": "https://github.com/jncasey", "followers_url": "https://api.github.com/users/jncasey/followers", "following_url": "https://api.github.com/users/jncasey/following{/other_user}", "gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}", "starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jncasey/subscriptions", "organizations_url": "https://api.github.com/users/jncasey/orgs", "repos_url": "https://api.github.com/users/jncasey/repos", "events_url": "https://api.github.com/users/jncasey/events{/privacy}", "received_events_url": "https://api.github.com/users/jncasey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is already there :-) Just pass along `--ignore_data_skip` in your script or `ignore_data_skip=True` in your `TrainingArguments`.", "Wow, that was fast! :) \r\n\r\nThat loads the model from the checkpoint and advances the dataset to the next sample that would have been trained in the original run?\r\n\r\nFrom my reading of the code I assumed that it reloaded the model and started the training over at the first sample of the dataset.\r\n", "Ah sorry I misunderstood your feature request. Indeed it starts from the first sample instead of iterating. What you ask is a bit more complicated and will require a minimum version of datasets. It's possible but will take some time.", "Sorry for the confusion from my writing! And thanks, as always, for your work on this amazing project.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
CONTRIBUTOR
null
# 🚀 Feature request It'd be great if, when resuming training from a checkpoint and using a Dataset with a format/transform function applied, the dataset's format/transform function could be ignored while iterating up to the last checkpoint step. @lhoestq @sgugger ## Motivation I doubt it's much of an issue most of the time, but I've started playing with `dataset.set_transform()` for doing some heavy preprocessing, and just iterating through samples to the current checkpoint step can take a ridiculously long time compared to a dataset without a transform applied. And I don't think there's any case where the transformed sample would be used, right? See [this conversation in the forum](https://discuss.huggingface.co/t/understanding-set-transform/3740/6?u=jncasey) for more backstory and my rudimentary thoughts on how I'd accomplish it. ## Your contribution I'm hesitant to try updating any of the trainer code myself since it's so complicated, and needs to cover so many edge cases I'm not familiar with.
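A small usage sketch of the existing flag mentioned in the replies; note it skips the fast-forward entirely, so the data order after resuming will not match an uninterrupted run:

```python
from transformers import TrainingArguments

# Sketch: resume from a checkpoint without replaying already-seen batches
# (and therefore without running the dataset's transform on them).
training_args = TrainingArguments(
    output_dir="output",
    ignore_data_skip=True,
)
```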
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10617/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10616
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10616/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10616/comments
https://api.github.com/repos/huggingface/transformers/issues/10616/events
https://github.com/huggingface/transformers/issues/10616
826,717,478
MDU6SXNzdWU4MjY3MTc0Nzg=
10,616
changing ".view()" to ".reshape()" for pytorch
{ "login": "KaiQiangSong", "id": 9112038, "node_id": "MDQ6VXNlcjkxMTIwMzg=", "avatar_url": "https://avatars.githubusercontent.com/u/9112038?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KaiQiangSong", "html_url": "https://github.com/KaiQiangSong", "followers_url": "https://api.github.com/users/KaiQiangSong/followers", "following_url": "https://api.github.com/users/KaiQiangSong/following{/other_user}", "gists_url": "https://api.github.com/users/KaiQiangSong/gists{/gist_id}", "starred_url": "https://api.github.com/users/KaiQiangSong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KaiQiangSong/subscriptions", "organizations_url": "https://api.github.com/users/KaiQiangSong/orgs", "repos_url": "https://api.github.com/users/KaiQiangSong/repos", "events_url": "https://api.github.com/users/KaiQiangSong/events{/privacy}", "received_events_url": "https://api.github.com/users/KaiQiangSong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
Newer versions of PyTorch encourage `.reshape()` over `.view()` in some situations: `.view()` only works on contiguous tensors and raises a RuntimeError otherwise, while `.reshape()` falls back to copying the data. There might be issues in places that still use `.view()` on tensors that are not guaranteed to be contiguous.
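A short example of the distinction behind this request, using plain PyTorch: `.view()` requires contiguous memory, while `.reshape()` falls back to a copy when needed.

```python
import torch

x = torch.randn(2, 3).t()  # transpose() returns a non-contiguous view
y = x.reshape(6)           # fine: reshape copies because x is not contiguous
# x.view(6)                # would raise a RuntimeError suggesting .reshape() instead
```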
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10616/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10615
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10615/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10615/comments
https://api.github.com/repos/huggingface/transformers/issues/10615/events
https://github.com/huggingface/transformers/pull/10615
826,655,485
MDExOlB1bGxSZXF1ZXN0NTg4NjQwMTU4
10,615
Fix tests of TrainerCallback
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? When introducing the `report_to` argument, I must have messed something up. The bottom line is that the tests of `TrainerCallback` can fail depending on what is installed in the environment (TensorBoard, for instance); this PR fixes that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10615/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10615", "html_url": "https://github.com/huggingface/transformers/pull/10615", "diff_url": "https://github.com/huggingface/transformers/pull/10615.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10615.patch", "merged_at": 1615325132000 }
https://api.github.com/repos/huggingface/transformers/issues/10614
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10614/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10614/comments
https://api.github.com/repos/huggingface/transformers/issues/10614/events
https://github.com/huggingface/transformers/issues/10614
826,541,541
MDU6SXNzdWU4MjY1NDE1NDE=
10,614
Not able to convert T5 tf checkpoints
{ "login": "RachitBansal", "id": 18123052, "node_id": "MDQ6VXNlcjE4MTIzMDUy", "avatar_url": "https://avatars.githubusercontent.com/u/18123052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RachitBansal", "html_url": "https://github.com/RachitBansal", "followers_url": "https://api.github.com/users/RachitBansal/followers", "following_url": "https://api.github.com/users/RachitBansal/following{/other_user}", "gists_url": "https://api.github.com/users/RachitBansal/gists{/gist_id}", "starred_url": "https://api.github.com/users/RachitBansal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachitBansal/subscriptions", "organizations_url": "https://api.github.com/users/RachitBansal/orgs", "repos_url": "https://api.github.com/users/RachitBansal/repos", "events_url": "https://api.github.com/users/RachitBansal/events{/privacy}", "received_events_url": "https://api.github.com/users/RachitBansal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems that T5 was added to the conversion scripts two months ago in https://github.com/huggingface/transformers/pull/9654 as you've mentioned.\r\n\r\nHowever, in your error there is no mention of \"t5\":\r\n```\r\nValueError: --model_type should be selected in the list [bert, gpt, gpt2, transfo_xl, xlnet, xlm]\r\n```\r\n\r\nBut on master it clearly shows there should be one:\r\n```\r\n\"--model_type should be selected in the list [bert, gpt, gpt2, t5, transfo_xl, xlnet, xlm, lxmert]\"\r\n```\r\n\r\nAre you certain you're launching the command in the correct environment? Could you share the result of `transformers-cli env`?\r\n\r\n---\r\n\r\nAlso, this conversion command is far from being complete, I think it could use some of your templating magic @sgugger if you ever feel like it :) \r\nIt isn't high priority as we already have the model-specific conversion scripts.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
Hi, I was trying to convert some TF checkpoints for T5 into PyTorch using ```transformers-cli convert```, and am getting the following error: > Traceback (most recent call last): File "/home/william18026/miniconda3/bin/transformers-cli", line 32, in <module> service.run() File "/home/william18026/miniconda3/lib/python3.7/site-packages/transformers/commands/convert.py", line 158, in run raise ValueError("--model_type should be selected in the list [bert, gpt, gpt2, transfo_xl, xlnet, xlm]") ValueError: --model_type should be selected in the list [bert, gpt, gpt2, transfo_xl, xlnet, xlm] My initial attempt was with transformers==4.3.3; I also tried the source version (4.4.0.dev0) and an editable clone, and got the same error with all of them. T5 seems to have been added to this conversion command in #9654, but for some reason it's not working on my end. What could I be doing wrong?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10614/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10614/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10613
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10613/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10613/comments
https://api.github.com/repos/huggingface/transformers/issues/10613/events
https://github.com/huggingface/transformers/issues/10613
826,298,015
MDU6SXNzdWU4MjYyOTgwMTU=
10,613
OOM issues with save_pretrained models
{ "login": "pablogranolabar", "id": 60016311, "node_id": "MDQ6VXNlcjYwMDE2MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/60016311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pablogranolabar", "html_url": "https://github.com/pablogranolabar", "followers_url": "https://api.github.com/users/pablogranolabar/followers", "following_url": "https://api.github.com/users/pablogranolabar/following{/other_user}", "gists_url": "https://api.github.com/users/pablogranolabar/gists{/gist_id}", "starred_url": "https://api.github.com/users/pablogranolabar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pablogranolabar/subscriptions", "organizations_url": "https://api.github.com/users/pablogranolabar/orgs", "repos_url": "https://api.github.com/users/pablogranolabar/repos", "events_url": "https://api.github.com/users/pablogranolabar/events{/privacy}", "received_events_url": "https://api.github.com/users/pablogranolabar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "This is a pretty big deal one would think. An almost 100% bloat of the model checkpoint when exporting compared to the model card...?", "This is impacting me as well, is it possible for us to reopen. I am happy to provide more relevant deatils", "This seems to be a duplicate of https://github.com/huggingface/transformers/issues/11222" ]
1,615
1,628
1,619
NONE
null
Posted this issue to the HuggingFace forums without a response. Having a weird issue with DialoGPT Large model deployment. From PyTorch 1.8.0 and Transformers 4.3.3 using model.save_pretrained and tokenizer.save_pretrained, the exported pytorch_model.bin is almost twice the size of the model card repo and results in OOM on a reasonably equipped machine that when using the standard transformers download process it works fine (I am building a CI pipeline to containerize the model hence the pre-populated model requirement): ``` Model card: pytorch_model.bin 1.6GB model.save_pretrained and tokenizer.save_pretrained: -rw-r--r-- 1 jrandel jrandel 800 Mar 6 16:51 config.json -rw-r--r-- 1 jrandel jrandel 446K Mar 6 16:51 merges.txt -rw-r--r-- 1 jrandel jrandel 3.0G Mar 6 16:51 pytorch_model.bin -rw-r--r-- 1 jrandel jrandel 357 Mar 6 16:51 special_tokens_map.json -rw-r--r-- 1 jrandel jrandel 580 Mar 6 16:51 tokenizer_config.json -rw-r--r-- 1 jrandel jrandel 780K Mar 6 16:51 vocab.json ``` When I download the model card files directly however, Iโ€™m getting the following errors: ``` curl -L https://huggingface.co/microsoft/DialoGPT-large/resolve/main/config.json -o ./model/config.json curl -L https://huggingface.co/microsoft/DialoGPT-large/resolve/main/pytorch_model.bin -o ./model/pytorch_model.bin curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/tokenizer_config.json -o ./model/tokenizer_config.json curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/config.json -o ./model/config.json curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/merges.txt -o ./model/merges.txt curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/special_tokens_map.json -o ./model/special_tokens_map.json curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/vocab.json -o ./model/vocab.json <snip> tokenizer = AutoTokenizer.from_pretrained("model/") File "/var/lang/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 395, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/var/lang/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained return cls._from_pretrained( File "/var/lang/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1801, in _from_pretrained slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained( File "/var/lang/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1876, in _from_pretrained special_tokens_map = json.load(special_tokens_map_handle) File "/var/lang/lib/python3.8/json/__init__.py", line 293, in load return loads(fp.read(), File "/var/lang/lib/python3.8/json/__init__.py", line 357, in loads return _default_decoder.decode(s) File "/var/lang/lib/python3.8/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/var/lang/lib/python3.8/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/var/runtime/bootstrap.py", line 481, in <module> main() File "/var/runtime/bootstrap.py", line 458, in main lambda_runtime_client.post_init_error(to_json(error_result)) File "/var/runtime/lambda_runtime_client.py", line 42, in post_init_error response = runtime_connection.getresponse() File 
"/var/lang/lib/python3.8/http/client.py", line 1347, in getresponse response.begin() File "/var/lang/lib/python3.8/http/client.py", line 307, in begin version, status, reason = self._read_status() File "/var/lang/lib/python3.8/http/client.py", line 276, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response time="2021-03-08T09:01:39.33" level=warning msg="First fatal error stored in appctx: Runtime.ExitError" time="2021-03-08T09:01:39.33" level=warning msg="Process 14(bootstrap) exited: Runtime exited with error: exit status 1" time="2021-03-08T09:01:39.33" level=error msg="Init failed" InvokeID= error="Runtime exited with error: exit status 1" time="2021-03-08T09:01:39.33" level=warning msg="Failed to send default error response: ErrInvalidInvokeID" time="2021-03-08T09:01:39.33" level=error msg="INIT DONE failed: Runtime.ExitError" time="2021-03-08T09:01:39.33" level=warning msg="Reset initiated: ReserveFail" ``` So what would be causing the large file variance between save_pretrained models and the model card repo? And any ideas why the directly downloaded model card files arenโ€™t working in this example? Thanks in advance
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10613/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10612
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10612/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10612/comments
https://api.github.com/repos/huggingface/transformers/issues/10612/events
https://github.com/huggingface/transformers/issues/10612
826,204,473
MDU6SXNzdWU4MjYyMDQ0NzM=
10,612
Implementing efficient self attention in T5
{ "login": "JamesDeAntonis", "id": 33379057, "node_id": "MDQ6VXNlcjMzMzc5MDU3", "avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JamesDeAntonis", "html_url": "https://github.com/JamesDeAntonis", "followers_url": "https://api.github.com/users/JamesDeAntonis/followers", "following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}", "gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}", "starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions", "organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs", "repos_url": "https://api.github.com/users/JamesDeAntonis/repos", "events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}", "received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "There are already some PRs regarding these models, I'm working on adding the Linformer (#10587), there's also a PR for the Performer (#9325, see further down the thread - people can already train T5 with Performer). " ]
1,615
1,615
null
CONTRIBUTOR
null
# 🌟 New model addition My teammates and I (including @ice-americano) would like to use efficient self attention methods such as Linformer, Performer and Nystromformer ## Model description These new methods serve as approximations of regular attention, but reduce complexity from quadratic in the inputs to linear. We would like to add a parameter to T5 where users can specify an efficient attention method to use instead of regular attention. Ideally, this would be implemented across all models, but the models tend to have varying implementations of attention, rendering this generalization fairly tedious. ## Open source status * [x] the model implementation is available: repos are https://github.com/mlpen and https://github.com/lucidrains/performer-pytorch * [ ] the model weights are available: N/A * [x] who are the authors: @mlpen and @lucidrains
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10612/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10612/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/10611
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10611/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10611/comments
https://api.github.com/repos/huggingface/transformers/issues/10611/events
https://github.com/huggingface/transformers/pull/10611
826,154,820
MDExOlB1bGxSZXF1ZXN0NTg4MTc4ODkz
10,611
split seq2seq script into summarization & translation
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @stas00 `run_seq2seq` is left as is for now. At some point, if your tests migrate from that script to either the new ones or another one, we will remove it.", "> cc @stas00 `run_seq2seq` is left as is for now. At some point, if your tests migrate from that script to either the new ones or another one, we will remove it.\r\n\r\nI know we discussed to potentially leave the all-in-one script for performance testing, but it's very likely we will be using a different approach that Morgan created.\r\n\r\nTherefore please don't leave this on me - please sync the tests with these changes and remove the do-it-all script. Thank you.\r\n", "May I suggest that the examples are inconsistent script naming-wise, some are very abbreviated `run_clm.py`, others are the extreme opposite `run_summarization.py` - that's a way too much to type - won't `run_sum.py` be sufficient?\r\n\r\nFor \"to type\" I mean when referring to them in documents, Issues, etc. There is no file-completion there.", "> May I suggest that the examples are inconsistent with script naming-wise, some are very abbreviated `run_clm.py`, others are the extreme opposite `run_summarization.py` - that's a way too much to type - won't `run_sum.py` be sufficient?\r\n\r\nmy personal preference goes to clarity so `run_summarization.py` trumps `run_sum.py` - but consistency is also important - unsure about the best tradeoff here\r\n\r\nedit: if we were to shorten the script names, what would be the matching acronym for `run_translation.py`?", "`run_trans.py`\r\n\r\nand just to clarify, I'm just flagging the inconsistency and my preference to type less, and in no way suggesting to interfere with this process - if most of you prefer the long names - go for it. ", "Also while we are at it or perhaps after the split - different PR, this further proposed improvement could be applied:\r\nhttps://github.com/huggingface/transformers/issues/10164\r\n", "I'm not seeing the benefit of shortening to `run_sum` and `run_trans` when tab-complete will give you the full name. In the language-modeling folder, the acronym was used because `run_causal_language_modeling`/`run_masked_language_modeling` is really long, same for question-answering.\r\n\r\nIf summarization and translation took two words at least, we could use the acronym to shorten the name, but since they don't I would go for the full name.", "> I'm not seeing the benefit of shortening to `run_sum` and `run_trans` when tab-complete will give you the full name. In the language-modeling folder, the acronym was used because `run_causal_language_modeling`/`run_masked_language_modeling` is really long, same for question-answering.\r\n> \r\n> If summarization and translation took two words at least, we could use the acronym to shorten the name, but since they don't I would go for the full name.\r\n\r\nI can't see how one is much longer than the other:\r\n```\r\nrun_causal_language_modeling\r\nrun_summarization\r\n```\r\n\r\nWould `run_causal_lm` and `run_masked_lm` be perhaps a good middle ground if you want example names to be of the explicit type?\r\n\r\n", "> I'm not seeing the benefit of shortening to `run_sum` and `run_trans` when tab-complete will give you the full name. 
In the language-modeling folder, the acronym was used because `run_causal_language_modeling`/`run_masked_language_modeling` is really long, same for question-answering.\r\n> \r\n> If summarization and translation took two words at least, we could use the acronym to shorten the name, but since they don't I would go for the full name.\r\n\r\nMy current understanding is that consistency primes for now, and the change for more explicit script names should be done in a separate PR, if there is indeed consensus that explicit names are better here.", "Very nice PR overall! \r\n\r\nI don't really agree though with the `sum`, `trans` and the \"one-size-fits-it-all\" design choices. Maybe we can settle on displaying a warning for T5 @stas00 ?", "> I don't really agree though with the `sum`, `trans` \r\n\r\nAs long as other examples are consistently named then it works.\r\n\r\n> and the \"one-size-fits-it-all\" design choices. \r\n\r\nIt doesn't sound like we are reaching a consensus here. I outlined that entering the same data more than once in the same input is error-prone. Perhaps there is another way to fix this w/o \"one-size-fits-it-all\" \r\n\r\n> Maybe we can settle on displaying a warning for T5 @stas00 ?\r\n\r\nYes, please. At the very least.", "@theo-m, @stas00 - if it's fine for you maybe we can change the script names to `run_summarization.py` and `run_translation.py` then and replace the automatic setting of `prefix` for T5 with a warning instead. Would that work? I'm more than happy to merge this PR then", "If the group prefers it this way then it is fine as you propose.", "> If the group prefers it this way then it is fine as you propose.\r\n\r\nOkey, maybe @sgugger, @patil-suraj and @LysandreJik can give their final word then as well", "I would prefer explicit names (`run_summarization.py` and `run_translation.py`) and not handling T5 prefixes automatically.", "I have already given my opinion on the names. For the `source_prefix` I have no strong opinion as I don't see any \"good\" solution sadly. I'm fine with the warning.", "Seems to me this could be merged, last call for maintainers @patrickvonplaten @sgugger @LysandreJik (thumbs up on this message will be interpreted as a go ๐Ÿ˜‰ )" ]
1,615
1,615
1,615
CONTRIBUTOR
null
keeping the original script for tests cc @stas00 fix #10164
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10611/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10611", "html_url": "https://github.com/huggingface/transformers/pull/10611", "diff_url": "https://github.com/huggingface/transformers/pull/10611.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10611.patch", "merged_at": 1615813903000 }
https://api.github.com/repos/huggingface/transformers/issues/10610
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10610/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10610/comments
https://api.github.com/repos/huggingface/transformers/issues/10610/events
https://github.com/huggingface/transformers/pull/10610
826,120,770
MDExOlB1bGxSZXF1ZXN0NTg4MTQ4NTE0
10,610
Trigger add sm information
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,616
1,615
MEMBER
null
The PR adds functionality to identify more telemetry information when training is run.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10610/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10610", "html_url": "https://github.com/huggingface/transformers/pull/10610", "diff_url": "https://github.com/huggingface/transformers/pull/10610.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10610.patch", "merged_at": 1615307506000 }
https://api.github.com/repos/huggingface/transformers/issues/10609
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10609/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10609/comments
https://api.github.com/repos/huggingface/transformers/issues/10609/events
https://github.com/huggingface/transformers/issues/10609
825,909,479
MDU6SXNzdWU4MjU5MDk0Nzk=
10,609
SortedDL for contiguous LM
{ "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "repos_url": "https://api.github.com/users/simonschoe/repos", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue is more suited for the forum, but maybe @sgugger has some hints to share!", "I'm not sure what you would want to sort a single text stream. The `Trainer` supports `--group_by_length` but that's when you have multiple texts.\r\n\r\nNote that `DataCollatorForLanguageModeling` only performs random masking on your prepared data, nothing more.", "Sorry, maybe I was imprecise here (still trying to wrap my head around a lot of these new concepts). Essentially, what I was wondering if batches for language modeling are constructed in a way that is similar to the approach that you described in [Chapter 10: \r\nNLP Deep Dive](https://github.com/fastai/fastbook/blob/master/10_nlp.ipynb) (section 'Putting Our Texts into Batches for a Language Model') of the book you co-authored. If I got it correctly, that would allow me to train the transformer over variable length input sequences without worrying about sequences being truncated due to the constraints imposed by `tokenizer.max_len_single_sentence` (overflowing parts would simply end up at the appropriate position in the nex mini-batch).", "I think you may be referring to the LM DataLoader. This kind of preprocessing is done using the `datasets` library on our side. Take a look at the [run_clm](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) or [run_mlm](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) examples (in run_mlm the part that is not in the block \"line_by_line\") or the [language modeling notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) to see how.", "Thanks a lot! Indeed, the `group_text()` function was exactly what I was looking for. ", "Hey @sgugger,\r\n\r\nsorry for reopening this one. I am still not 100% sure if I conveyed my issue properly the first time. Hence, let me briefly restate it using an example:\r\n\r\nLet's say I chunk my text stream of length 100 (let's say including 3 sentences) into blocks of `block_size=5`, resulting in 20 blocks. Now, I'd like to feed them into my model using a `batch_size=5`, resulting in 4 batches ร  5 text blocks.\r\n\r\nI am still not 100% sure how they are fed into the model using the `DataCollatorForLanguageModeling` and `Trainer` API:\r\n```python\r\n# Variant A\r\n\r\nbatch_1 = [block1, block2, block3, block4, block5]\r\n# ...\r\nbatch_4 = [block16, block17, block18, block19, block20]\r\n``` \r\n\r\n```python\r\n# Variant B\r\n\r\nbatch_1 = [block1, block5, block9, block13, block17]\r\nbatch_2 = [block2, block6, block10, block14, block18]\r\n# ...\r\nbatch_4 = [block4, block8, block12, block16, block20]\r\n``` \r\n\r\nIf I got it correctly, the method presented in the book referred to in the previous comment relates to *variant B*.", "You will need to write your own data collator for that as this is not in the Transformers library: contrary to LSTMs, Transformers do not have a state so we don't care about the ordering across batches for those models.", "This makes entirely sense, thanks for lifting this barrier in my head! " ]
1,615
1,616
1,616
NONE
null
Hi there, I am currently implementing LM re-training of a RoBERTa model using the `Trainer` API. Since I have a huge training corpus, I was wondering if there is a functionality in the `Trainer` or the corresponding `DataCollatorForLanguageModeling` that allows for sorted batching as in `fastai`? More precisely, I would like to feed in all my training data as a contiguous text stream and let the respective functions handle sorted batching irrespective of the sequence length of the individual sequences. Best, Simon
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10609/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10608
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10608/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10608/comments
https://api.github.com/repos/huggingface/transformers/issues/10608/events
https://github.com/huggingface/transformers/pull/10608
825,886,530
MDExOlB1bGxSZXF1ZXN0NTg3OTM4MzYx
10,608
Image feature extractor design
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? Here we can discuss how to design the `ImageFeatureExtractor` class, and the `ViTFeatureExtractor` subclass. The hierarchy looks as follows: `FeatureExtractorMixin` -> `ImageFeatureExtractor` -> `ViTFeatureExtractor`. The `FeatureExtractorMixin` defines common properties among `SequenceFeatureExtractors` (for speech recognition) and `ImageFeatureExtractors` (for vision related tasks), namely saving utilities and the general `BatchFeature` class. Notes: - `ImageFeatureExtractor` is based on `SequenceFeatureExtractor`, but with some changes: renamed `max_length` to `max_resolution`, renamed `PaddingStrategy.LONGEST` to `PaddingStrategy.LARGEST` (to pad to the resolution of the largest image in a batch), renamed `PaddingStrategy.MAX_LENGTH` to `PaddingStrategy.MAX_RESOLUTION`. - Currently, this `ImageFeatureExtractor` class defines common properties among feature extractors for vision models, which are now `image_mean`, `image_std` and `padding_value`. Each concrete FeatureExtractor then provides values for these 3 attributes, and defines any additional attributes. - Currently, the `ImageFeatureExtractor` class only defines `pad` and `_pad` methods (which should be updated to work for 2D images), but I guess we can add general image transformation methods (such as resize, normalize), and maybe also a `__call__` method. These are now all defined in `ViTFeatureExtractor`. ## Who can review? @patrickvonplaten @patil-suraj @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10608/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10608", "html_url": "https://github.com/huggingface/transformers/pull/10608", "diff_url": "https://github.com/huggingface/transformers/pull/10608.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10608.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10607
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10607/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10607/comments
https://api.github.com/repos/huggingface/transformers/issues/10607/events
https://github.com/huggingface/transformers/issues/10607
825,722,416
MDU6SXNzdWU4MjU3MjI0MTY=
10,607
Can't load config for hosted model, works when downloaded
{ "login": "MatejUlcar", "id": 26550612, "node_id": "MDQ6VXNlcjI2NTUwNjEy", "avatar_url": "https://avatars.githubusercontent.com/u/26550612?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatejUlcar", "html_url": "https://github.com/MatejUlcar", "followers_url": "https://api.github.com/users/MatejUlcar/followers", "following_url": "https://api.github.com/users/MatejUlcar/following{/other_user}", "gists_url": "https://api.github.com/users/MatejUlcar/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatejUlcar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatejUlcar/subscriptions", "organizations_url": "https://api.github.com/users/MatejUlcar/orgs", "repos_url": "https://api.github.com/users/MatejUlcar/repos", "events_url": "https://api.github.com/users/MatejUlcar/events{/privacy}", "received_events_url": "https://api.github.com/users/MatejUlcar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! I can do `model = AutoModelForMaskedLM.from_pretrained(\"EMBEDDIA/sloberta\")` without any issues on my end.\r\n\r\nCould it be linked to a connection issue?\r\n\r\n```py\r\n>>> from transformers import AutoModelForMaskedLM\r\n>>> model = AutoModelForMaskedLM.from_pretrained(\"EMBEDDIA/sloberta\")\r\nDownloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 520/520 [00:00<00:00, 215kB/s]\r\nDownloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 443M/443M [00:38<00:00, 11.5MB/s]\r\n\r\n```", "Tested from two computers on two different networks, didn't work. Managed to load it on Google Colab, though.\r\n\r\nAnyway, it does work now. I was foiled by conda loading an old version of transformers instead of a newer one. Thanks!" ]
1,615
1,615
1,615
NONE
null
I have recently (19 hours ago) uploaded a new model to huggingface: https://huggingface.co/EMBEDDIA/sloberta When attempting to load it with `model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/sloberta")` I get the following error: ``` Traceback (most recent call last): File "/home/mulcar/.conda/envs/transformerslatest/lib/python3.8/site-packages/transformers/configuration_utils.py", line 353, in get_config_dict raise EnvironmentError OSError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/mulcar/.conda/envs/transformerslatest/lib/python3.8/site-packages/transformers/modeling_auto.py", line 1105, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/home/mulcar/.conda/envs/transformerslatest/lib/python3.8/site-packages/transformers/configuration_auto.py", line 272, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/mulcar/.conda/envs/transformerslatest/lib/python3.8/site-packages/transformers/configuration_utils.py", line 362, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 'EMBEDDIA/sloberta'. Make sure that: - 'EMBEDDIA/sloberta' is a correct model identifier listed on 'https://huggingface.co/models' - or 'EMBEDDIA/sloberta' is the correct path to a directory containing a config.json file ``` If I download the model from the huggingface.co, eg. cloning the model's repo, it loads perfectly fine. Is there a waiting time before a new model is completely added to the system? Or is there an other issue going on?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10607/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10606
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10606/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10606/comments
https://api.github.com/repos/huggingface/transformers/issues/10606/events
https://github.com/huggingface/transformers/pull/10606
825,568,203
MDExOlB1bGxSZXF1ZXN0NTg3NjQ5NzU3
10,606
[M2M100] remove final_logits_bias
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
# What does this PR do? M2M100 does not need `final_logits_bias`, this PR removes it from the `M2M100ForConditionalGeneration`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10606/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10606", "html_url": "https://github.com/huggingface/transformers/pull/10606", "diff_url": "https://github.com/huggingface/transformers/pull/10606.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10606.patch", "merged_at": 1615350151000 }
https://api.github.com/repos/huggingface/transformers/issues/10605
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10605/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10605/comments
https://api.github.com/repos/huggingface/transformers/issues/10605/events
https://github.com/huggingface/transformers/pull/10605
825,562,298
MDExOlB1bGxSZXF1ZXN0NTg3NjQ0NDU3
10,605
Fix cross-attention head mask for Torch encoder-decoder models
{ "login": "stancld", "id": 46073029, "node_id": "MDQ6VXNlcjQ2MDczMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stancld", "html_url": "https://github.com/stancld", "followers_url": "https://api.github.com/users/stancld/followers", "following_url": "https://api.github.com/users/stancld/following{/other_user}", "gists_url": "https://api.github.com/users/stancld/gists{/gist_id}", "starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stancld/subscriptions", "organizations_url": "https://api.github.com/users/stancld/orgs", "repos_url": "https://api.github.com/users/stancld/repos", "events_url": "https://api.github.com/users/stancld/events{/privacy}", "received_events_url": "https://api.github.com/users/stancld/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hi @patrickvonplaten & @patil-suraj, my PR does not pass one test, however, I am not able to reproduce this error on my local (I can't even find the file at `src/transformers/models/new_enc_dec/modeling_new_enc_dec.py` in the repo, which is the one where should be a problem with a copy inconsistency)", "Hi @stancld thank you for your work! The issue is because you have updated all model files (thank you!!), but you haven't updated the template. The template is used when adding a new model, it's available [here](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model).\r\n\r\nFor example, these lines should probably be updated to include the `cross_head_mask`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/44f64132a5f50726f9de4467ed745421c3b11ab3/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/modeling_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py#L2070-L2085", "@LysandreJik - Thank you very much for the clarification :)", "Wonderful, the tests pass! Thanks for handling the templates.", "@patil-suraj Thank you for your review and the suggestions. I agree with the change in variable naming. The new one is more accurate. I will change it everywhere accordingly :) ", "Hi @patrickvonplaten, I've just rebased this branch to the current master to keep this PR up to date. Could you please review this one? Thanks a lot! :) ", "Hey @stancld, Patrick is off for a couple of weeks but will take a look at this as soon as he's back :)", "The PR looks good to me :-) \r\n\r\nThink we just need to fix the docstring. @stancld - let me know if you need help regarding the docstring", "@patrickvonplaten The docstring should be fixed now. I forgot a line from a conflict there.. :)" ]
1,615
1,619
1,619
CONTRIBUTOR
null
1. This PR fixes head masking for the cross-attention module in the following models: - BART, - Blenderbot, - Blenderbot_small, - FSMT, - LED, - M2M_100, - Marian, - MBart, - Pegasus. - T5 2. This PR also contains slight changes in docstrings so that it will be clear that `head_mask` is related to the config of an encoder and the shape of `decoder_head_mask` and `cross_head_mask` depends on the config of a decoder. 3. This PR enables `test_headmasking` for M2M_100 model. <hr> **Reviewers:** @patrickvonplaten @patil-suraj <hr> Fixes: #10540
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10605/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10605", "html_url": "https://github.com/huggingface/transformers/pull/10605", "diff_url": "https://github.com/huggingface/transformers/pull/10605.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10605.patch", "merged_at": 1619197087000 }
https://api.github.com/repos/huggingface/transformers/issues/10604
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10604/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10604/comments
https://api.github.com/repos/huggingface/transformers/issues/10604/events
https://github.com/huggingface/transformers/pull/10604
825,451,258
MDExOlB1bGxSZXF1ZXN0NTg3NTQ0MTI0
10,604
fix flaky m2m100 test
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
# What does this PR do? The `test_retain_grad_hidden_states_attentions` test is sometimes failing for `M2M100`, with the error ` AttributeError: 'NoneType' object has no attribute 'retain_grad'`. This is because of `layerdrop`: sometimes a layer is skipped and the `encoder_attentions/decoder_attentions/cross_attentions` can be `None`. This PR sets the `config.encoder_layerdrop` and `config.decoder_layerdrop` to 0 in tests to make the tests deterministic.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10604/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10604", "html_url": "https://github.com/huggingface/transformers/pull/10604", "diff_url": "https://github.com/huggingface/transformers/pull/10604.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10604.patch", "merged_at": 1615300508000 }
https://api.github.com/repos/huggingface/transformers/issues/10603
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10603/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10603/comments
https://api.github.com/repos/huggingface/transformers/issues/10603/events
https://github.com/huggingface/transformers/issues/10603
825,414,757
MDU6SXNzdWU4MjU0MTQ3NTc=
10,603
AlbertForSequenceClassification random output
{ "login": "Zjq9409", "id": 62974595, "node_id": "MDQ6VXNlcjYyOTc0NTk1", "avatar_url": "https://avatars.githubusercontent.com/u/62974595?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zjq9409", "html_url": "https://github.com/Zjq9409", "followers_url": "https://api.github.com/users/Zjq9409/followers", "following_url": "https://api.github.com/users/Zjq9409/following{/other_user}", "gists_url": "https://api.github.com/users/Zjq9409/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zjq9409/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zjq9409/subscriptions", "organizations_url": "https://api.github.com/users/Zjq9409/orgs", "repos_url": "https://api.github.com/users/Zjq9409/repos", "events_url": "https://api.github.com/users/Zjq9409/events{/privacy}", "received_events_url": "https://api.github.com/users/Zjq9409/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! You're loading a model called `albert_chinese_base`; my guess is that this model only contains the base transformer model and not the sequence classification head that you need.\r\n\r\nDoes that make sense? You should use a model fine-tuned on sequence classification, and not a base model, if you want to do sequence classification. Of course, that model will be fine-tuned to a specific sequence classification task so it can't be used in any context.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
I use the AlbertForSequenceClassification interface as follows: `import torch from transformers import BertTokenizer,AlbertConfig,AlbertForSequenceClassification import numpy pretrained = "./albert_chinese_base" tokenizer = BertTokenizer.from_pretrained(pretrained) config = AlbertConfig.from_json_file('./albert_chinese_base/config.json') config.output_hidden_states = True model = AlbertForSequenceClassification.from_pretrained(pretrained, config = config) inputtext = "今天心情情很好啊，买了很多东西，我特别喜欢，终于有了自己喜欢的电子产品，这次总算可以好好学习了" max_length = 128 tokenized_text=tokenizer.encode_plus(inputtext, add_special_tokens = True, # add [CLS], [SEP] max_length = max_length, # max length of the text that can go to BERT pad_to_max_length = True, # add [PAD] tokens return_attention_mask = True, # add attention mask to not focus on pad tokens return_tensors="pt") outputs=model(input_ids=tokenized_text["input_ids"], token_type_ids=tokenized_text["token_type_ids"], attention_mask=tokenized_text["attention_mask"]) print(outputs.logits)` but, when I run this code, an error occurred as follows: `Some weights of the model checkpoint at ./albert_chinese_base were not used when initializing AlbertForSequenceClassification: ['predictions.bias', 'predictions.LayerNorm.weight', 'predictions.LayerNorm.bias', 'predictions.dense.weight', 'predictions.dense.bias', 'predictions.decoder.weight', 'predictions.decoder.bias'] -This IS expected if you are initializing AlbertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). -This IS NOT expected if you are initializing AlbertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of AlbertForSequenceClassification were not initialized from the model checkpoint at ./albert_chinese_base and are newly initialized: ['classifier.weight', 'classifier.bias']` And when I print the outputs.logits value, I find that the value is different every time I run; the logits look like this: tensor([[0.3077, 0.1200]], grad_fn=) tensor([[-0.3245, -0.3117]], grad_fn=) So I wonder whether the AlbertForSequenceClassification model is not initializing the values of ['classifier.weight', 'classifier.bias'] correctly and produces random values every time I run, while the BertForSequenceClassification output is right, so how can I solve the problem?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10603/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10602
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10602/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10602/comments
https://api.github.com/repos/huggingface/transformers/issues/10602/events
https://github.com/huggingface/transformers/pull/10602
825,353,100
MDExOlB1bGxSZXF1ZXN0NTg3NDU3NzQz
10,602
[examples template] added max_sample args and metrics changes
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ya @stas00,\nActually I figured that and tested visually with the example tamplate not the model template" ]
1,615
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? This PR adds the same as https://github.com/huggingface/transformers/pull/10551 and https://github.com/huggingface/transformers/pull/10436 to the cookie-cutter template. Fixes #10423 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## review: @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10602/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10602", "html_url": "https://github.com/huggingface/transformers/pull/10602", "diff_url": "https://github.com/huggingface/transformers/pull/10602.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10602.patch", "merged_at": 1615309616000 }
https://api.github.com/repos/huggingface/transformers/issues/10601
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10601/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10601/comments
https://api.github.com/repos/huggingface/transformers/issues/10601/events
https://github.com/huggingface/transformers/pull/10601
825,203,794
MDExOlB1bGxSZXF1ZXN0NTg3MzIwMDIy
10,601
Speedup tf tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This single change made the tests on the TF CI pass from 6+ hours (non slow) to 29/31 minutes: https://github.com/huggingface/transformers/actions/runs/635711849\r\n\r\nWill look for a way to reduce their time." ]
1,615
1,615
1,615
MEMBER
null
Fyi @sgugger @patrickvonplaten @stas00, I'm temporarily marking these tests as slow as they take more than 1 hour 30 minutes and prevent the CI from completing, therefore preventing any relevant information from coming out of the TF tests. I'm working on improving the CI times so this is temporary (will revert by Friday).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10601/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10601/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10601", "html_url": "https://github.com/huggingface/transformers/pull/10601", "diff_url": "https://github.com/huggingface/transformers/pull/10601.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10601.patch", "merged_at": 1615257848000 }
https://api.github.com/repos/huggingface/transformers/issues/10600
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10600/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10600/comments
https://api.github.com/repos/huggingface/transformers/issues/10600/events
https://github.com/huggingface/transformers/pull/10600
825,155,678
MDExOlB1bGxSZXF1ZXN0NTg3Mjc3MzMz
10,600
[docs] How to solve "Title level inconsistent" sphinx error
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
This PR documents an easy solution to the "Title level inconsistent" puzzle when adding a new sub-section to an `.rst` doc. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10600/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10600", "html_url": "https://github.com/huggingface/transformers/pull/10600", "diff_url": "https://github.com/huggingface/transformers/pull/10600.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10600.patch", "merged_at": 1615263394000 }
https://api.github.com/repos/huggingface/transformers/issues/10599
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10599/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10599/comments
https://api.github.com/repos/huggingface/transformers/issues/10599/events
https://github.com/huggingface/transformers/pull/10599
825,067,039
MDExOlB1bGxSZXF1ZXN0NTg3MjAwMjAz
10,599
Pass encoder outputs into GenerationMixin
{ "login": "ymfa", "id": 6981180, "node_id": "MDQ6VXNlcjY5ODExODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6981180?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ymfa", "html_url": "https://github.com/ymfa", "followers_url": "https://api.github.com/users/ymfa/followers", "following_url": "https://api.github.com/users/ymfa/following{/other_user}", "gists_url": "https://api.github.com/users/ymfa/gists{/gist_id}", "starred_url": "https://api.github.com/users/ymfa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ymfa/subscriptions", "organizations_url": "https://api.github.com/users/ymfa/orgs", "repos_url": "https://api.github.com/users/ymfa/repos", "events_url": "https://api.github.com/users/ymfa/events{/privacy}", "received_events_url": "https://api.github.com/users/ymfa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ymfa , thanks a lot for the PR!\r\n\r\n- The `generate` method does allow you to pass `input_embeds` as a keyword argument (`**model_kwargs` arg). If `input_embeds` is passed then T5 or any other model will use those instead of `input_ids`. If you look at the `generate` method signature, you can see that `input_ids` is optional.\r\n\r\n- I think we can allow to pass `encoder_outputs` directly to `generate`. Pinging @patrickvonplaten ", "Hey @ymfa thanks for your PR and your thoughtful explanations in the description! \r\n\r\nThe design philosophy of `generate()` is that the 99% cases should be covered by `generate()` and for specific cases the sub-generated generate functions should be called directly as it is the case in the examples here:\r\nhttps://github.com/huggingface/transformers/blob/0d909f6bd8ca0bc1ec8f42e089b64b4fffc4d230/src/transformers/generation_utils.py#L1592\r\n\r\nAs you can see, you only need to add a couple more lines when directly using `beam_search(...)` instead `generate(...)`. Could this solve your use case?\r\n\r\n\r\n\r\n", "Hi @patil-suraj , thanks for your comment. \r\n\r\nI do realise that I can pass additional arguments via `**model_kwargs`. However, it doesn't work if I pass `inputs_embeds` instead of `input_ids`, because there are a number of places in `generate()` that depend on `input_ids`. \r\n\r\nSpecifically, this is the error I got (model is T5ForConditionalGeneration):\r\n```\r\n>>> model.generate(inputs_embeds=input_embeded)\r\nTraceback (most recent call last):\r\n...\r\nValueError: `bos_token_id` has to be defined when no `input_ids` are provided.\r\n```\r\n\r\nIf I do pass `bos_token_id` (which shouldn't be necessary), this is the error (it's because the `input_ids` are created from `bos_token_id` and passed to the encoder):\r\n```\r\n>>> model.generate(inputs_embeds=input_embeded, bos_token_id=0)\r\nTraceback (most recent call last):\r\n...\r\nValueError: You cannot specify both inputs and inputs_embeds at the same time\r\n```\r\n\r\nSo in the end, I am now using the method in this PR to achieve this purpose.", "Thanks @patrickvonplaten . \r\n\r\nTo be honest I haven't found a way to make `beam_search()` work even for the simple use case of passing ids only. In this example (model is T5ForConditionalGeneration), `input_ids_batch` is the just 5 identical `input_ids` stacked together for the beam size.\r\n```\r\n>>> input_ids_batch.shape\r\ntorch.Size([5, 15])\r\n>>> model.beam_search(input_ids_batch, beam_scorer)\r\nTraceback (most recent call last):\r\n...\r\n File \"...python3.7/site-packages/transformers/models/t5/modeling_t5.py\", line 871, in forward\r\n raise ValueError(f\"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds\")\r\nValueError: You have to specify either inputs or inputs_embeds\r\n```\r\n\r\nWhen I pass the `encoder_outputs`:\r\n```\r\n>>> model.beam_search(decoder_input_ids_batch, beam_scorer, encoder_outputs=encoder_outputs)\r\nTraceback (most recent call last):\r\n...\r\n File \"...python3.7/site-packages/transformers/models/t5/modeling_t5.py\", line 498, in forward\r\n scores += position_bias\r\nRuntimeError: The size of tensor a (3) must match the size of tensor b (15) at non-singleton dimension 3\r\n```\r\n\r\nI don't think it would be a simpler PR if I change `beam_search()` instead of `generate()`. It would also be less general as it doesn't apply to the other decoding methods, and it would require the user to prepare `beam_scorer` and `decoder_input_ids_batch`. 
", "> scores += position_bias\r\n\r\nI think you just need to broadcast your `decoder_input_ids_batch` to be of the same size as `encoder_outputs`, then it should work. \r\n\r\nOverall, I'm not really in favor of adding this new special use case. To me it's not naturally to do `model.generate(None, decoder_input_ids=..., encoder_outputs=encoder_outputs)` lot of people won't understand that the first argument has in fact to be `None` since it corresponds to the encoder input ids. For such a case it should be relatively easy to make it work with `beam_search` to be honest. The philosophy here is that that both `decoder_input_ids` and `encoder_outputs.last_hidden_state` have to be of the sample batch dimensions, so `encoder_outputs.shape][0] == decoder_input_ids.shape[0]` . Then the command is as simple as:\r\n\r\n```\r\nmodel.beam_search(decoder_input_ids, beam_scorer, encoder_outputs)\r\n```\r\n\r\nThe problem is that if we allow to many specific use cases for `generate()` the method becomes quite cluttered with if-statements again and I would like to avoid it. In this case, I think it's much cleaner to directly call the `beam_search` method tbh.", "What do you think @patil-suraj ?", "I've made a change so that the method signature of `generate()` remains the same as before. There is not \"clutterness\" added into this method now. This PR can be regarded as a fix, by properly handling the case when `encoder_outputs` is passed as one of the `model_kwargs`.\r\n\r\nYou're right that decoder_input_ids and encoder_outputs.last_hidden_state have to be of the sample batch dimensions. However, this means both of them need to be broadcast according to the beam size, which is not done automatically. A user would have to broadcast these tensors and objects manually in order to call `beam_search()`.\r\n\r\nTo be honest, I have been using the patch I submitted here for doing both beam search and sampling. The easy-to-use generation utility is one of my main reasons to choose the transformers package. I don't think it is a good idea to limit the potential based on \"cleanness.\" On the contrary, I even suggest refactoring `generate()` more systematically, so that `input_ids` is no longer used like a central \"currency\" in this method.", "Hey @ymfa, \r\n\r\nIt's actually a good point that changing between different generation methods while using `encoder_outputs` is not very user-friendly and \"pre-computing\" encoder_outputs is quite a common use case for all seq2seq models. Also since only the \"helper\" methods `_prepare...` are changed, I think I'm fine with the PR now! Thanks for being persistent here!\r\n\r\n@patil-suraj @LysandreJik - it would be nice you could take a look as well.", "I think `generate` now deserves its own doc page where we could explain this and maybe some more details like what the method supports, what are its limitations and what features won't be supported etc. It's changed significantly after the refactor and the design now follows a strict philosophy. It would be better to document that. What do you think @patrickvonplaten?\r\n\r\n@ymfa The PR is good to merge, I'll merge it once you add helpful comments in `_prepare_input_ids_for_generation` as suggested by Patrick and Lysandre." ]
1,615
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? For encoder-decoder models such as T5, `GenerationMixin.generate()` currently runs both the encoder and the decoder. This PR allows one to pass already-computed `encoder_outputs` into this method, thus only the decoder will be run. The flexibility to skip the encoder in the generation utilities is useful for several different scenarios, such as: - The T5 encoder can encode `inputs_embeds` instead of `input_ids`. However, this is not possible within the generation utilities because only `input_ids` is accepted. With the changes in this PR, one can encode the `inputs_embeds` separately, and pass the encoder outputs to `generate()`. This is a partial solution to the issue https://github.com/huggingface/transformers/issues/6535. - In some applications, the same encoder outputs are reused in different decoding processes. It would be computationally efficient not having to recompute the encoder outputs. - This would also allow altering the encoder outputs for the purpose of incorporating additional information, etc. (In general, I think it is good practice to offer this option, where the encoding process is "decoupled" from the generation utilities.) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
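For context, a minimal usage sketch of the behaviour this PR enables (illustrative only — the checkpoint name and prompt are arbitrary, and this is not code from the PR itself):

```python
# Sketch: run the T5 encoder once, then reuse its outputs across generate() calls,
# so only the decoder is executed during generation.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")

# Encode once.
encoder_outputs = model.get_encoder()(inputs["input_ids"], attention_mask=inputs["attention_mask"])

# Decode several times (beam search, sampling, ...) without re-running the encoder.
beam_ids = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    encoder_outputs=encoder_outputs,
    num_beams=4,
)
sample_ids = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    encoder_outputs=encoder_outputs,
    do_sample=True,
)
print(tokenizer.decode(beam_ids[0], skip_special_tokens=True))
```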
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10599/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10599", "html_url": "https://github.com/huggingface/transformers/pull/10599", "diff_url": "https://github.com/huggingface/transformers/pull/10599.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10599.patch", "merged_at": 1615565591000 }
https://api.github.com/repos/huggingface/transformers/issues/10598
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10598/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10598/comments
https://api.github.com/repos/huggingface/transformers/issues/10598/events
https://github.com/huggingface/transformers/pull/10598
824,956,262
MDExOlB1bGxSZXF1ZXN0NTg3MTA2ODM3
10,598
Check layer types for Optimizer construction
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? As pointed out on the [forum](https://discuss.huggingface.co/t/parameter-groups-and-gpt2-layernorm/4239), `Trainer` currently excludes layer norm layers from weight decay by matching a name pattern, which is not followed consistently by all models. This PR instead checks the actual layer types, and adds some tests.
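To illustrate the idea (a rough sketch, not the Trainer's actual implementation — the helper name is made up):

```python
# Collect the names of parameters that live inside layer norm modules, so they can be
# placed in the optimizer group with weight_decay=0.0 regardless of how they are named.
import torch.nn as nn

def get_parameter_names_in_layer_types(model: nn.Module, forbidden_types=(nn.LayerNorm,)):
    exempt = []
    for module_name, module in model.named_modules():
        if isinstance(module, forbidden_types):
            exempt.extend(f"{module_name}.{name}" for name, _ in module.named_parameters())
    return exempt
```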
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10598/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10598", "html_url": "https://github.com/huggingface/transformers/pull/10598", "diff_url": "https://github.com/huggingface/transformers/pull/10598.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10598.patch", "merged_at": 1615239612000 }
https://api.github.com/repos/huggingface/transformers/issues/10597
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10597/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10597/comments
https://api.github.com/repos/huggingface/transformers/issues/10597/events
https://github.com/huggingface/transformers/issues/10597
824,950,439
MDU6SXNzdWU4MjQ5NTA0Mzk=
10,597
No model card for roberta-large-finetuned-wsc
{ "login": "ngoquanghuy99", "id": 36761076, "node_id": "MDQ6VXNlcjM2NzYxMDc2", "avatar_url": "https://avatars.githubusercontent.com/u/36761076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ngoquanghuy99", "html_url": "https://github.com/ngoquanghuy99", "followers_url": "https://api.github.com/users/ngoquanghuy99/followers", "following_url": "https://api.github.com/users/ngoquanghuy99/following{/other_user}", "gists_url": "https://api.github.com/users/ngoquanghuy99/gists{/gist_id}", "starred_url": "https://api.github.com/users/ngoquanghuy99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ngoquanghuy99/subscriptions", "organizations_url": "https://api.github.com/users/ngoquanghuy99/orgs", "repos_url": "https://api.github.com/users/ngoquanghuy99/repos", "events_url": "https://api.github.com/users/ngoquanghuy99/events{/privacy}", "received_events_url": "https://api.github.com/users/ngoquanghuy99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Are you talking about this model? https://huggingface.co/mrm8488/roberta-large-finetuned-wsc", "> Are you talking about this model? https://huggingface.co/mrm8488/roberta-large-finetuned-wsc\r\n\r\nYes, this model i can not call by transformers. So how can i use it?", "You can't call it from transformers? Do you get an error?\r\nIt seems I can load it:\r\n\r\n```py\r\n>>> from transformers import AutoModelForMaskedLM\r\n>>> model = AutoModelForMaskedLM.from_pretrained(\"mrm8488/roberta-large-finetuned-wsc\")\r\n```", "> You can't call it from transformers? Do you get an error?\r\n> It seems I can load it:\r\n> \r\n> ```python\r\n> >>> from transformers import AutoModelForMaskedLM\r\n> >>> model = AutoModelForMaskedLM.from_pretrained(\"mrm8488/roberta-large-finetuned-wsc\")\r\n> ```\r\n\r\nI couldn't load it from \"AutoModel\". Thanks for your snippet! Anyways, should i finetune this for text classification task by removing Language Modeling head on top? Just for experiments!", "You can load it in a text-classification auto model in order to fine-tune it to text-classification:\r\n\r\n```py\r\n>>> from transformers import AutoModelForSequenceClassification\r\n... model = AutoModelForSequenceClassification.from_pretrained(\"mrm8488/roberta-large-finetuned-wsc\")\r\n```\r\n\r\nIt tells you that the LM head layers were discarded, and that it initialized randomly the layers for text-classification:\r\n\r\n```\r\nSome weights of the model checkpoint at mrm8488/roberta-large-finetuned-wsc were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']\r\n- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of RobertaForSequenceClassification were not initialized from the model checkpoint at mrm8488/roberta-large-finetuned-wsc and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nYou should now fine-tune this on a text-classification dataset so that the randomly initialized layers may be trained!", "Oh wow, i don't fine tune for classification by this way. I directly removed the LM head though. But still thank you Lysandre!", "Happy to help!" ]
1,615
1,615
1,615
CONTRIBUTOR
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation I cannot fine-tune the model `roberta-large-finetuned-wsc`; it doesn't have a model card. ## Your contribution Please fix this!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10597/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10596
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10596/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10596/comments
https://api.github.com/repos/huggingface/transformers/issues/10596/events
https://github.com/huggingface/transformers/pull/10596
824,904,450
MDExOlB1bGxSZXF1ZXN0NTg3MDYyNzk5
10,596
Fairscale FSDP fix model save
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 2803958109, "node_id": "MDU6TGFiZWwyODAzOTU4MTA5", "url": "https://api.github.com/repos/huggingface/transformers/labels/fairscale", "name": "fairscale", "color": "A94273", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR fixes a hang when training with the fairscale fully-sharded wrapper: recent changes in fairscale appear to perform a synchronization during the model `state_dict` call, which makes training hang if that method is not called on all processes. This PR addresses that.
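The general pattern looks roughly like this (a sketch under the assumption of a torch.distributed setup, not the Trainer's exact code):

```python
import torch
import torch.distributed as dist

def save_sharded_model(model, output_path):
    # With a fully sharded wrapper, state_dict() is a collective operation, so every
    # rank has to call it; skipping it on non-zero ranks makes the run hang.
    state_dict = model.state_dict()
    # Only the main process actually writes the gathered weights to disk.
    if not dist.is_initialized() or dist.get_rank() == 0:
        torch.save(state_dict, output_path)
```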
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10596/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10596", "html_url": "https://github.com/huggingface/transformers/pull/10596", "diff_url": "https://github.com/huggingface/transformers/pull/10596.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10596.patch", "merged_at": 1615318927000 }
https://api.github.com/repos/huggingface/transformers/issues/10595
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10595/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10595/comments
https://api.github.com/repos/huggingface/transformers/issues/10595/events
https://github.com/huggingface/transformers/pull/10595
824,660,007
MDExOlB1bGxSZXF1ZXN0NTg2ODU3NzIw
10,595
Fix version control with anchors
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? In URLs containing an anchor (such as https://huggingface.co/transformers/master/installation.html#caching-models), the version controller was not finding the right version (because the page URL wasn't ending in `.html`). This PR fixes that. Fixes #10559
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10595/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10595/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10595", "html_url": "https://github.com/huggingface/transformers/pull/10595", "diff_url": "https://github.com/huggingface/transformers/pull/10595.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10595.patch", "merged_at": 1615216763000 }
https://api.github.com/repos/huggingface/transformers/issues/10594
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10594/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10594/comments
https://api.github.com/repos/huggingface/transformers/issues/10594/events
https://github.com/huggingface/transformers/pull/10594
824,563,672
MDExOlB1bGxSZXF1ZXN0NTg2Nzc2Mzc2
10,594
[FeatureExtractorSavingUtils] Refactor PretrainedFeatureExtractor
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
# What does this PR do? This PR refactors the class `PreTrainedFeatureExtractor`. The following changes move functionality that is shared between sequence and image feature extractors into a separate file. This should unblock the PRs of [DETR](https://github.com/huggingface/transformers/pull/9998), [VIT](https://github.com/huggingface/transformers/pull/10513), and [CLIP](https://github.com/huggingface/transformers/pull/10426) - `PreTrainedFeatureExtractor` is renamed to `PreTrainedSequenceFeatureExtractor` because it implicitly assumed that it would treat only sequential inputs (i.e. a sequence of float values or a sequence of float vectors); `PreTrainedFeatureExtractor` was too general a name - All functionality that is shared between image and speech feature extractors (which IMO all relates to "saving" utilities) is moved to a `FeatureExtractorSavingUtilsMixin` - `BatchFeature` is moved from `feature_extraction_sequence_utils.py` to `feature_extraction_common_utils.py` so that it can be used by the `PreTrainedImageFeatureExtractor` class as well - The tests are refactored accordingly. The following things were assumed before applying the changes: - In the mid-term future there will only be three modalities in HF: text, sequential features (value sequence, vector sequence), and image features (2D non-sequential arrays) - Models such as ViT, DETR & CLIP will call their "preprocessor" `VITFeatureExtractor`, etc. IMO, feature extractor is also a fitting name for image recognition (see: https://en.wikipedia.org/wiki/Feature_extraction), so it is assumed that for image-text or image-only models there will be a `PreTrainedImageFeatureExtractor` and a `VITFeatureExtractor` (and maybe a VITTokenizer & VITProcessor as well, but not necessarily). For vision-text models that require both a tokenizer and a feature extractor, such as CLIP, it is assumed that the classes `CLIPFeatureExtractor` and `CLIPTokenizer` are wrapped into a `CLIPProcessor` class, similar to `Wav2Vec2Processor`. I think this is the most important assumption taken here, so we should make sure we are on the same page @LysandreJik @sgugger @patil-suraj @NielsRogge - Image-text or image-only models won't require a `BatchImageFeature` or `BatchImage`, but can just use `BatchFeature`. From looking at the code in @NielsRogge's PR here: https://github.com/huggingface/transformers/pull/10513 this seems to be the case. # Backwards compatibility: The class `PreTrainedFeatureExtractor` was accessible via: ```python from transformers import PreTrainedFeatureExtractor ``` but is now replaced by `PreTrainedSequenceFeatureExtractor`. However, since `PreTrainedFeatureExtractor` was so far only available on master, this change is OK IMO.
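In code, the class layout described above roughly amounts to the following skeleton (placeholder bodies; only the class names come from this PR):

```python
class FeatureExtractorSavingUtilsMixin:
    """Shared saving/loading utilities for feature extractors of all modalities."""

class PreTrainedSequenceFeatureExtractor(FeatureExtractorSavingUtilsMixin):
    """Feature extractors for sequential inputs (sequences of float values or vectors)."""

class PreTrainedImageFeatureExtractor(FeatureExtractorSavingUtilsMixin):
    """Planned counterpart for 2D image inputs, reusing BatchFeature for its outputs."""
```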
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10594/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10594/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10594", "html_url": "https://github.com/huggingface/transformers/pull/10594", "diff_url": "https://github.com/huggingface/transformers/pull/10594.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10594.patch", "merged_at": 1615281419000 }
https://api.github.com/repos/huggingface/transformers/issues/10593
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10593/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10593/comments
https://api.github.com/repos/huggingface/transformers/issues/10593/events
https://github.com/huggingface/transformers/pull/10593
824,500,587
MDExOlB1bGxSZXF1ZXN0NTg2NzI1MDY2
10,593
Enable torch 1.8.0 on GPU CI
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
This enables torch 1.8.0 on the GPU CI, and disables the torch-scatter tests today as they're creating issues and blocking the CI pipeline.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10593/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10593", "html_url": "https://github.com/huggingface/transformers/pull/10593", "diff_url": "https://github.com/huggingface/transformers/pull/10593.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10593.patch", "merged_at": 1615205503000 }
https://api.github.com/repos/huggingface/transformers/issues/10592
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10592/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10592/comments
https://api.github.com/repos/huggingface/transformers/issues/10592/events
https://github.com/huggingface/transformers/issues/10592
824,345,238
MDU6SXNzdWU4MjQzNDUyMzg=
10,592
CUBLAS_STATUS_INTERNAL_ERROR at examples/question-answering/run_qa.py
{ "login": "LozanoAlvarezb", "id": 76513765, "node_id": "MDQ6VXNlcjc2NTEzNzY1", "avatar_url": "https://avatars.githubusercontent.com/u/76513765?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LozanoAlvarezb", "html_url": "https://github.com/LozanoAlvarezb", "followers_url": "https://api.github.com/users/LozanoAlvarezb/followers", "following_url": "https://api.github.com/users/LozanoAlvarezb/following{/other_user}", "gists_url": "https://api.github.com/users/LozanoAlvarezb/gists{/gist_id}", "starred_url": "https://api.github.com/users/LozanoAlvarezb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LozanoAlvarezb/subscriptions", "organizations_url": "https://api.github.com/users/LozanoAlvarezb/orgs", "repos_url": "https://api.github.com/users/LozanoAlvarezb/repos", "events_url": "https://api.github.com/users/LozanoAlvarezb/events{/privacy}", "received_events_url": "https://api.github.com/users/LozanoAlvarezb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! I don't think torch supports CUDA 11.2 yet. See https://github.com/pytorch/pytorch/issues/50232#issuecomment-777703998", "I had a similar issue with torch 1.8 and solved it by downgrading to 1.7.1", "> Hi! I don't think torch supports CUDA 11.2 yet. See [pytorch/pytorch#50232 (comment)](https://github.com/pytorch/pytorch/issues/50232#issuecomment-777703998)\r\n\r\nThanks for the quick response. I just tested the script with CUDA11.1 and it worked just fine.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,618
1,618
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.0.dev0 - Platform: Linux-5.10.20-1-lts-x86_64-with-glibc2.2.5 - Python version: 3.8.3 - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help - @LysandreJik - @sgugger ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce ``` #!/bin/bash python3 -m venv env source env/bin/activate pip install torch pip install datasets git clone https://github.com/huggingface/transformers.git pip install -e transformers/ python transformers/examples/question-answering/run_qa.py \ --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 1 \ --learning_rate 3e-5 \ --num_train_epochs 4 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir ./models/ ``` ``` 03/08/2021 09:54:08 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 2distributed training: False, 16-bits training: False 03/08/2021 09:54:08 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=./models/, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=1, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/Mar08_09-54-06_inf-105-gpu-1, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=./models/, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=[], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=2) 03/08/2021 09:54:08 - WARNING - datasets.builder - Reusing dataset squad (/home/blozano/.cache/huggingface/datasets/squad/plain_text/1.0.0/0fd9e01360d229a22adfe0ab7e2dd2adc6e2b3d6d3db03636a51235947d4c6e9) [INFO|configuration_utils.py:463] 2021-03-08 09:54:09,206 >> loading configuration file https://huggingface.co/bert-base-uncased/resolve/main/config.json from cache at /home/blozano/.cache/huggingface/transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.637c6035640bacb831febcc2b7f7bee0a96f9b30c2d7e9ef84082d9f252f3170 [INFO|configuration_utils.py:499] 2021-03-08 09:54:09,207 >> Model 
config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.4.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 30522 } [INFO|configuration_utils.py:463] 2021-03-08 09:54:09,509 >> loading configuration file https://huggingface.co/bert-base-uncased/resolve/main/config.json from cache at /home/blozano/.cache/huggingface/transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.637c6035640bacb831febcc2b7f7bee0a96f9b30c2d7e9ef84082d9f252f3170 [INFO|configuration_utils.py:499] 2021-03-08 09:54:09,510 >> Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.4.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 30522 } [INFO|tokenization_utils_base.py:1721] 2021-03-08 09:54:10,138 >> loading file https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt from cache at /home/blozano/.cache/huggingface/transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99 [INFO|tokenization_utils_base.py:1721] 2021-03-08 09:54:10,138 >> loading file https://huggingface.co/bert-base-uncased/resolve/main/tokenizer.json from cache at /home/blozano/.cache/huggingface/transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4 [INFO|modeling_utils.py:1051] 2021-03-08 09:54:10,501 >> loading weights file https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin from cache at /home/blozano/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f [WARNING|modeling_utils.py:1158] 2021-03-08 09:54:12,594 >> Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] - This IS expected if you are initializing BertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
[WARNING|modeling_utils.py:1169] 2021-03-08 09:54:12,594 >> Some weights of BertForQuestionAnswering were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 03/08/2021 09:54:12 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/blozano/.cache/huggingface/datasets/squad/plain_text/1.0.0/0fd9e01360d229a22adfe0ab7e2dd2adc6e2b3d6d3db03636a51235947d4c6e9/cache-a560de6b2f76743b.arrow 03/08/2021 09:54:12 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/blozano/.cache/huggingface/datasets/squad/plain_text/1.0.0/0fd9e01360d229a22adfe0ab7e2dd2adc6e2b3d6d3db03636a51235947d4c6e9/cache-15b011eed342eca6.arrow [INFO|trainer.py:471] 2021-03-08 09:54:15,885 >> The following columns in the evaluation set don't have a corresponding argument in `BertForQuestionAnswering.forward` and have been ignored: example_id, offset_mapping. [INFO|trainer.py:929] 2021-03-08 09:54:15,937 >> ***** Running training ***** [INFO|trainer.py:930] 2021-03-08 09:54:15,937 >> Num examples = 88524 [INFO|trainer.py:931] 2021-03-08 09:54:15,937 >> Num Epochs = 4 [INFO|trainer.py:932] 2021-03-08 09:54:15,937 >> Instantaneous batch size per device = 1 [INFO|trainer.py:933] 2021-03-08 09:54:15,937 >> Total train batch size (w. parallel, distributed & accumulation) = 2 [INFO|trainer.py:934] 2021-03-08 09:54:15,937 >> Gradient Accumulation steps = 1 [INFO|trainer.py:935] 2021-03-08 09:54:15,937 >> Total optimization steps = 177048 0%| | 0/177048 [00:00<?, ?it/s]Traceback (most recent call last): File "transformers/examples/question-answering/run_qa.py", line 507, in <module> main() File "transformers/examples/question-answering/run_qa.py", line 481, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/blozano/finetune_qa/transformers/src/transformers/trainer.py", line 1036, in train tr_loss += self.training_step(model, inputs) File "/home/blozano/finetune_qa/transformers/src/transformers/trainer.py", line 1420, in training_step loss = self.compute_loss(model, inputs) File "/home/blozano/finetune_qa/transformers/src/transformers/trainer.py", line 1452, in compute_loss outputs = model(**inputs) File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/blozano/finetune_qa/transformers/src/transformers/models/bert/modeling_bert.py", line 1775, in forward outputs = self.bert( File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/blozano/finetune_qa/transformers/src/transformers/models/bert/modeling_bert.py", line 971, in forward encoder_outputs = self.encoder( File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/blozano/finetune_qa/transformers/src/transformers/models/bert/modeling_bert.py", line 568, in forward layer_outputs = layer_module( File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/blozano/finetune_qa/transformers/src/transformers/models/bert/modeling_bert.py", line 456, in forward self_attention_outputs = self.attention( File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/blozano/finetune_qa/transformers/src/transformers/models/bert/modeling_bert.py", line 387, in forward self_outputs = self.self( File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/blozano/finetune_qa/transformers/src/transformers/models/bert/modeling_bert.py", line 253, in forward mixed_query_layer = self.query(hidden_states) File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 94, in forward return F.linear(input, self.weight, self.bias) File "/home/blozano/finetune_qa/env/lib/python3.8/site-packages/torch/nn/functional.py", line 1753, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)` ``` ``` NVIDIA-SMI 460.56 Driver Version: 460.56 CUDA Version: 11.2 ``` ## Expected behavior The expected default behavior as stated in transformers/examples/question-answering/README.md
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10592/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10591
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10591/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10591/comments
https://api.github.com/repos/huggingface/transformers/issues/10591/events
https://github.com/huggingface/transformers/pull/10591
824,345,213
MDExOlB1bGxSZXF1ZXN0NTg2NTk1NzA5
10,591
Fix typo in docstring for pipeline
{ "login": "silvershine157", "id": 22359626, "node_id": "MDQ6VXNlcjIyMzU5NjI2", "avatar_url": "https://avatars.githubusercontent.com/u/22359626?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silvershine157", "html_url": "https://github.com/silvershine157", "followers_url": "https://api.github.com/users/silvershine157/followers", "following_url": "https://api.github.com/users/silvershine157/following{/other_user}", "gists_url": "https://api.github.com/users/silvershine157/gists{/gist_id}", "starred_url": "https://api.github.com/users/silvershine157/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silvershine157/subscriptions", "organizations_url": "https://api.github.com/users/silvershine157/orgs", "repos_url": "https://api.github.com/users/silvershine157/repos", "events_url": "https://api.github.com/users/silvershine157/events{/privacy}", "received_events_url": "https://api.github.com/users/silvershine157/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? Fixed typo in docstring for pipeline ("conversation" -> "conversational") <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10591/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10591", "html_url": "https://github.com/huggingface/transformers/pull/10591", "diff_url": "https://github.com/huggingface/transformers/pull/10591.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10591.patch", "merged_at": 1615198204000 }
https://api.github.com/repos/huggingface/transformers/issues/10590
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10590/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10590/comments
https://api.github.com/repos/huggingface/transformers/issues/10590/events
https://github.com/huggingface/transformers/pull/10590
824,341,483
MDExOlB1bGxSZXF1ZXN0NTg2NTkyNDY1
10,590
[M2M100] fix positional embeddings
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
# What does this PR do? The torchscript tests for `M2M100` are failing on master. This is because the `weights` in `M2M100SinusoidalPositionalEmbedding` are initially not on the same device as the rest of the parameters. This PR makes `weights` an `nn.Parameter`, so it ends up on the same device.
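A minimal sketch of the underlying issue and fix (illustrative, not the model's actual code):

```python
import torch
import torch.nn as nn

class SinusoidalPositionalEmbedding(nn.Module):
    def __init__(self, num_positions: int, embedding_dim: int):
        super().__init__()
        position = torch.arange(num_positions, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, embedding_dim, 2, dtype=torch.float)
            * (-torch.log(torch.tensor(10000.0)) / embedding_dim)
        )
        weights = torch.zeros(num_positions, embedding_dim)
        weights[:, 0::2] = torch.sin(position * div_term)
        weights[:, 1::2] = torch.cos(position * div_term)
        # A plain tensor attribute would be ignored by Module.to(device); registering it
        # as a frozen Parameter makes it move together with the rest of the model.
        self.weights = nn.Parameter(weights, requires_grad=False)

    def forward(self, position_ids: torch.Tensor) -> torch.Tensor:
        return self.weights[position_ids]
```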
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10590/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10590", "html_url": "https://github.com/huggingface/transformers/pull/10590", "diff_url": "https://github.com/huggingface/transformers/pull/10590.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10590.patch", "merged_at": 1615199779000 }
https://api.github.com/repos/huggingface/transformers/issues/10589
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10589/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10589/comments
https://api.github.com/repos/huggingface/transformers/issues/10589/events
https://github.com/huggingface/transformers/issues/10589
824,326,217
MDU6SXNzdWU4MjQzMjYyMTc=
10,589
Small question about BertForMaskedLM usage on TF model
{ "login": "rmxkyz", "id": 56808566, "node_id": "MDQ6VXNlcjU2ODA4NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/56808566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rmxkyz", "html_url": "https://github.com/rmxkyz", "followers_url": "https://api.github.com/users/rmxkyz/followers", "following_url": "https://api.github.com/users/rmxkyz/following{/other_user}", "gists_url": "https://api.github.com/users/rmxkyz/gists{/gist_id}", "starred_url": "https://api.github.com/users/rmxkyz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rmxkyz/subscriptions", "organizations_url": "https://api.github.com/users/rmxkyz/orgs", "repos_url": "https://api.github.com/users/rmxkyz/repos", "events_url": "https://api.github.com/users/rmxkyz/events{/privacy}", "received_events_url": "https://api.github.com/users/rmxkyz/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Hi! In order to use either of those weights you'll have to convert them to a HuggingFace format.\r\n\r\nFor that you have two available scripts:\r\n- From [TF1](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py)\r\n- From [TF2](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py)", "Hi @LysandreJik! Thanks for always being kind and active to solve our questions, I have checked both TF1 and TF2 convert script which TF1 works perfectly to convert bert_model.ckpt to pytorch_model.bin, while the TF2 will always give error message as picture here:\r\n![1](https://user-images.githubusercontent.com/56808566/110410789-6caaa480-80c4-11eb-9814-228961224342.PNG)\r\nTo add on, in my task I keep training checkpoint model on my dataset by running [run_pretraining.py ](https://github.com/tensorflow/models/blob/master/official/nlp/bert/run_pretraining.py), so according to the script description on TF2 convert script, I believe the problem is mlm head which I have to add something above the convert script to convert model right? or keep running run_pretraining.py won't give such header on it?\r\n![2](https://user-images.githubusercontent.com/56808566/110411072-e0e54800-80c4-11eb-9ef1-38cf0a1a144e.PNG)\r\nIf the header exist, how can I solve this problem? Even though I tried BERT for months but I am not that confident to say I understand how it works. In #9941 I have tried to add elif condition on m_name but seems it's a bad approach on this question. \r\n\r\nAgain, thanks for the reply! Really appreciated for giving me such direction to it!\r\n\r\n", "Hmmm, I see this is an issue indeed! Could you let me know how you obtained your TF2 checkpoint so that I may check it on my side?\r\n\r\nYou're welcome, happy to help :)", "Sure! Due to the [repository](https://github.com/tensorflow/models/tree/master/official/nlp/bert#access-to-pretrained-checkpoints) haven't release chinese model yet, so the way I obtained this model is by these steps, \r\n1. Download [bert-base-chinese](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip) from google-bert\r\n2. Use this [script](https://github.com/tensorflow/models/blob/master/official/nlp/bert/tf2_encoder_checkpoint_converter.py) to convert tf1 checkpoint to tf2 with following args\r\n`python tf2_encoder_checkpoint_converter.py --checkpoint_to_convert=$BASE_DIR/bert_model.ckpt --converted_checkpoint_path=tmp/ --bert_config_file=$BASE_DIR/bert_config.json`\r\nnote: $BASE_DIR is the path to the model directory (which is the model done by step 1 & 2)\r\n3. Using [create_pretraining_data.py](https://github.com/tensorflow/models/blob/master/official/nlp/data/create_pretraining_data.py) to create the dataset that I want to keep training by its domain, the simple data I use can be found [here](https://drive.google.com/file/d/163IAWfgQZ1TIN6WYzWdoRwmFXDzGIRxj/view?usp=sharing).\r\nThe args I use are like this\r\n`python create_pretraining_data.py --input_file=./sample.txt --output_file=ModelRecord --vocab_file=./$BASE_DIR/vocab.txt --max_seq_length=128 --max_predictions_per_seq=19 --masked_lm_prob=0.15 --random_seed=46 --dupe_factor=1`\r\n4. Now we have ModelRecord, use tf2 version [run_pretraining.py](https://github.com/tensorflow/models/blob/master/official/nlp/bert/run_pretraining.py) to training model on these instances in ModelRecord. 
The args I use look like this\r\n`python run_pretraining.py --input_file=ModelRecord --output_dir=tmp/Model_L128_B32 --bert_config_file=bert_config.json --max_seq_length=128 --max_predictions_per_seq=19 --do_train=True --train_batch_size=32 --num_train_steps=2000000 --num_warmup_steps=2000 --learning_rate=1e-5 --save_checkpoints_step=60000 --keep_checkpoint_max=4`\r\n5. After training, the directory in Model_L128_B32(where model saved) containing these checkpoint,\r\n![4](https://user-images.githubusercontent.com/56808566/110595525-0fdce600-81b9-11eb-918a-4a418b049d02.PNG) \r\nI load the checkpoint in directory \"pretrained\", and rename the checkpoint I want to evaluate to model.ckpt.data-00000-of-00001, ckpt_index to model.ckpt.index, with config and vocab in the directory. \r\n\r\n### System Info\r\nMy environments are: (using pipreqs)\r\nPython version: 3.7.5\r\nCUDA used to build PyTorch: cuda_11.0_bu.TC445_37.28845127_0\r\nOS: Ubuntu 18.04.5 LTS\r\nGCC version: 7.5.0\r\n\r\n### Versions of relevant libraries: (using pipreqs)\r\nnumpy==1.19.5\r\ntransformers==4.2.2\r\nsix==1.15.0\r\ntorch==1.7.1+cu110\r\ntensorflow_gpu==2.2.0\r\ngin_config==0.1.1\r\nabsl_py==0.11.0\r\ntensorflow_hub==0.11.0\r\nsentencepiece==0.1.94\r\nabsl==0.0\r\ntorchvision==0.8.2+cu110\r\nbert4keras==0.9.9\r\ngin==0.1.006\r\ntensorflow==2.4.1\r\n\r\n\r\nI think that's all of it, if I miss any step please inform me with no hesitate, I might get wrong :P \r\nBest regards!\r\n", "I see, the conversion script should work for that use-case. Is there a way for you to share the checkpoints you have obtained so that I can take a look? You can share them through the hub under your username.", "I test for a while and also follow what #8504 did, but now sure why not success\r\n![5](https://user-images.githubusercontent.com/56808566/110792383-47798a00-82ae-11eb-8be3-b73ea4f615bd.PNG)\r\n\r\ninstead, I upload to [mydrive](https://drive.google.com/drive/folders/1e1xHXZQSEpHBI0YF6xLpUY2Ne8Zi7Cjl?usp=sharing), I will try it again tomorrow see if I miss something.", "Ah, it's probably because you didn't install git-lfs/didn't track the files! Doing this in the repo should help:\r\n```\r\ngit-lfs install\r\ngit-lfs track <name_of_your_large_file>\r\n```\r\nYou can check it's being correctly tracked:\r\n```\r\ngit-lfs track\r\n```\r\ncheck for the name of your large file and ensure it's being tracked by git-lfs:\r\n```\r\nObjects to be committed:\r\n\r\n\t.gitattributes (Git: 31aaf10)\r\n\tREADME.md (Git: 358442a)\r\n\tconfig.json (Git: 57a54a8)\r\n \r\n[...] \r\n\tpytorch_model.bin (LFS: 6a9a9a5)\r\n ^^^\r\n[...]\r\n\tspecial_tokens_map.json (Git: e3ec7ab)\r\n\ttokenizer_config.json (Git: ab033df)\r\n\tvocab.txt (Git: 4d96f93)\r\n\r\n```\r\nThen you should be able to push without any issue", "LGTM! The only issue I encounter is I don't actually know how these file being add. \r\n\r\n``` \r\n\tpytorch_model.bin (LFS: 6a9a9a5)\r\n ^^^\r\n```\r\nMy occasion for track and commit is\r\n![2](https://user-images.githubusercontent.com/56808566/110880117-12565180-8319-11eb-8877-3a1c928966e5.PNG)\r\n![1](https://user-images.githubusercontent.com/56808566/110880101-0bc7da00-8319-11eb-87ad-5ee6686a2d01.PNG)\r\n \r\nI assume this will be shown at the git commit -m \"comment\" phase? Anyway I finally upload my [model](https://huggingface.co/rmxkyz/zh_tf2/tree/main), weeee! \r\n", "Fantastic! I don't have time to look at it today, but I'll try and do that on Monday. Thanks!", "Thanks for your kindly support! 
I am not in a rush with this experiment, so take your time and look at it when you're free :D", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
Hi everyone, I was using BertForMaskedLM to predict possible candidate words in a sentence. For example: **cat like to drink [MASK], so am I.** With the [tf1 bert-pretrained model](https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip) the output is `[[{'sequence': '[CLS] cat like to drink it, so am i. [SEP]', 'score': 0.22914603352546692, 'token': 2009, 'token_str': 'i t'}, {'sequence': '[CLS] cat like to drink water, so am i. [SEP]', 'score': 0.1088637188076973, 'token': 2300, 'token_str': 'w a t e r'}, {'sequence': '[CLS] cat like to drink blood, so am i. [SEP]', 'score': 0.1075243279337883, 'token': 2668, 'token_str': 'b l o o d'}]]` However, with the [tf2 bert-pretrained model](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12.tar.gz) the output is `[[{'sequence': '[CLS] cat like to drink∘, so am i. [SEP]', 'score': 0.0002078865800285712, 'token': 30126, 'token_str': '# # ∘'}, {'sequence': '[CLS] cat like to drink zinc, so am i. [SEP]', 'score': 0.00020266805950086564, 'token': 15813, 'token_str': 'z i n c'}, {'sequence': '[CLS] cat like to drink organic, so am i. [SEP]', 'score': 0.00019718357361853123, 'token': 7554, 'token_str': 'o r g a n i c'}]]` It seems the TF2 checkpoint produces essentially random, unreasonable words, and I have no idea what causes this. I notice the implementations of BERT in TF1 and TF2 are different (I think so, since TF2 uses Keras as its core), so does the architecture differ between the TF1 and TF2 pretrained checkpoints in a way that makes BertForMaskedLM produce different embeddings? Does BertForMaskedLM support reading weights from a TF2 model? The code I am using is something like this ``` config = BertConfig.from_pretrained("./pretrained/tf2") config.is_decoder=False tokenizer = BertTokenizer.from_pretrained("./pretrained/tf2") model = BertForMaskedLM.from_pretrained("./pretrained/tf2", config=config, from_tf=True) ``` tf1 checkpoint in FileZilla ![tf1](https://user-images.githubusercontent.com/56808566/110295862-96b08800-802c-11eb-985e-90b12449a21a.PNG) tf2 ![tf2](https://user-images.githubusercontent.com/56808566/110295918-a760fe00-802c-11eb-84a5-b35dd101da7f.PNG) Am I using the wrong API (should I use something other than BertForMaskedLM), or is there something I have to change to make it work? Any suggestion or feedback is sincerely appreciated, thanks in advance!
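A hedged sketch of the workflow suggested in the replies above (convert the original TF1 checkpoint with the conversion script linked there, then sanity-check fill-mask on the converted directory). The paths are illustrative, not taken from the issue, and the script flags should be double-checked against the script's `--help`.

```python
# Sketch under stated assumptions; conversion is done beforehand with something like:
#   python convert_bert_original_tf_checkpoint_to_pytorch.py \
#     --tf_checkpoint_path ./pretrained/tf1/bert_model.ckpt \
#     --bert_config_file ./pretrained/tf1/bert_config.json \
#     --pytorch_dump_path ./pretrained/converted/pytorch_model.bin
# and config.json + vocab.txt are copied into ./pretrained/converted.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="./pretrained/converted",      # illustrative local path
    tokenizer="./pretrained/converted",
)
# The masked-LM predictions should now look sensible again, e.g. "water", "milk", ...
print(fill_mask("cat like to drink [MASK], so am I."))
```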
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10589/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10588
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10588/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10588/comments
https://api.github.com/repos/huggingface/transformers/issues/10588/events
https://github.com/huggingface/transformers/issues/10588
824,324,943
MDU6SXNzdWU4MjQzMjQ5NDM=
10,588
Can't reproduce xlm-roberta-large finetuned result on XNLI
{ "login": "ntudy", "id": 20146770, "node_id": "MDQ6VXNlcjIwMTQ2Nzcw", "avatar_url": "https://avatars.githubusercontent.com/u/20146770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ntudy", "html_url": "https://github.com/ntudy", "followers_url": "https://api.github.com/users/ntudy/followers", "following_url": "https://api.github.com/users/ntudy/following{/other_user}", "gists_url": "https://api.github.com/users/ntudy/gists{/gist_id}", "starred_url": "https://api.github.com/users/ntudy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ntudy/subscriptions", "organizations_url": "https://api.github.com/users/ntudy/orgs", "repos_url": "https://api.github.com/users/ntudy/repos", "events_url": "https://api.github.com/users/ntudy/events{/privacy}", "received_events_url": "https://api.github.com/users/ntudy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,615
1,615
1,615
NONE
null
# โ“ Questions & Help I'm trying to finetune `xlm-roberta-large` on MNLI English training data and make zero-shot classification on XNLI dataset. However, I found that `xlm-roberta-large` is super sensitive to hyper parameters. The reported average accuracy is 80.9, while my model can only achieve 79.74, which is 1% less than the reported accuracy. I used Adam optimizer with 5e-6 learning rate and the batch size is 16. Any one can suggest better hyperparameters to reproduce the XNLI result of `xlm-roberta-large`?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10588/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10587
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10587/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10587/comments
https://api.github.com/repos/huggingface/transformers/issues/10587/events
https://github.com/huggingface/transformers/pull/10587
824,304,025
MDExOlB1bGxSZXF1ZXN0NTg2NTYwNDU3
10,587
[WIP] Add Linformer
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Unstale", "This Linformer's ForSequenceClassification is not work. I finished mlm pretrain with 1.14loss(100epoch, wiki-cn). But in afqmf, acc only reach 0.6899", "Hi @luoda888,\r\n\r\ngreat to see you're already using Linformer! Did you pretrain Linformer on several GPUs using the `run_mlm.py` script? What is afqmf?\r\n\r\nNote that in the [original paper](https://arxiv.org/abs/2006.04768), they state that \"All models are pretrained with the masked-language-modeling (MLM) objective, and the training for all experiments are parallelized across 64 Tesla V100 GPUs with 250k updates.\"", "@NielsRogge since the weights for `linformer` are not yet released, maybe we can add this model in the the `research_projects` dir, similar to [performer](https://github.com/huggingface/transformers/tree/master/examples/research_projects/performer) and add a training/fine-tuning script there.", "> Hi @luoda888,\r\n> \r\n> great to see you're already using Linformer! Did you pretrain Linformer on several GPUs using the `run_mlm.py` script? What is afqmf?\r\n> \r\n> Note that in the [original paper](https://arxiv.org/abs/2006.04768), they state that \"All models are pretrained with the masked-language-modeling (MLM) objective, and the training for all experiments are parallelized across 64 Tesla V100 GPUs with 250k updates.\"\r\n\r\nThanks reply.\r\n1. I pre-trained the linformer model from scratch, using Wikipedia Chinese corpus. Set text maxlen=128, k=64, on 10 nodes (each with 8*32g V100) (which means I use 80*v100), 2.5 hours of pre-training. Using LinkerForMaskedLM, mlm loss reached 1.14 when training stopped.\r\n2. I use the trained model for validation on downstream tasks, and afqmf is the semantic similarity (text matching) task in Chinese (https://github.com/CLUEbenchmark/CLUE). According to the report, the accuracy of bert-base-chinese is 74.16% when the parameter settings of bs=16, lr=2e-5, maxlen=128, epoch=3 are fine-tuned. I used the parameters of bs=256, lr=2e-5, maxlen=128, epoch=3 and the bert-base-chinese accuracy was 72.81%. However, when the LinformerForSequenceClassifier is used for classification, the AUC is only 0.5, and the accuracy is only 0.6899 (that is, all predictions are 0 categories).\r\n3. For the specific training details, I use the Trainer interface, distributed pre-training. The config of the informer is seq_length=128, LinformerConfig(vocab_size=vocab_size, seq_length=seq_length, share_projection=True, k=seq_length//2) . I use BertWordPieceTokenizer for tokenizer, random mask (DataCollatorForLanguageModeling). with FP16 & train_batch_size=256, gradient_accumulation_steps=1, dataloader_num_workers=4, model.save_pretrained(output_files).\r\n\r\n_**I analyze possible causes.**_\r\n1. The fine-tuning script can run properly, and the bert-base-chinese open source weight is executed as expected.\r\n2. Is it possible that the tokenizer is faulty? Inconsistent in pre-training and fine-tuning.\r\n3. Press BertConfig again to perform pre-training. The fine-tuning effect is also AUC = 0.5. 
So I'm guessing that Bert Word Piece Tokenizer has some problems with Chinese.\r\n\r\nIf you don't mind, I'd like to share the code with you to locate the problem.", "> @NielsRogge since the weights for `linformer` are not yet released, maybe we can add this model in the the `research_projects` dir, similar to [performer](https://github.com/huggingface/transformers/tree/master/examples/research_projects/performer) and add a training/fine-tuning script there.\r\n\r\nI think it's a very good idea, and it would be nice to have a torch-only version available instead of jax, flax. Also, I've observed that transformers_plus_performers reproduce the performer paper, and I don't know if you've noticed.", "I'd love you to come up with a pre-training script from the beginning on the Chinese Wikipedia and how to fine-tune the Chinese downstream tasks. In addition, it is possible to design more pre-training tasks (SpanMask + SBO, StuncBert(TGS), ASP, etc.) and design more fine-tuned multi-task learning (MT-DNN).\r\n\r\nBTW, nezha is also can add in models. https://github.com/lonePatient/NeZha_Chinese_PyTorch. I test it in Chinese~\r\nBTWWW, PGD & FGM & FreeAT & FreeLB & SMART also can add in modeling_language or fine-tune~~~~", "That's really interesting to read! Wow, 80 V100's, that's a lot.\r\n\r\nSome remarks:\r\n\r\n* I still need someone to verify my implementation of `LinformerSelfAttention` in `modeling_linformer.py`, as I'm currently the only one that implemented it, I'd like to have a second opinion to know for sure my implementation is correct. I used [this implementation](https://github.com/mlpen/Nystromformer/blob/main/code/attention_linformer.py) (from the author of Nystrรถmformer, who benchmarked several efficient self-attention implementations) as a reference. I also checked @lucidrains' implementation, available [here](https://github.com/lucidrains/linformer). However, as you see the loss going down, it might indeed be a tokenizer issue (I'm fairly sure my implementation is correct). \r\n* In both the original paper and the Nystrรถmformer implementation, it looks like they rely on a RoBERTa encoder rather than BERT, hence they also use `RobertaTokenizer` rather than `BertTokenizer`. So this is something that might have to be updated (as mentioned at the top of this PR). \r\n* Did you use `BertTokenizer.from_pretrained(\"bert-base-chinese\")`? Or did you train the tokenizer on Chinese Wikipedia before using it?\r\n\r\nAnd indeed, it would be great if we can provide the same functionality for Chinese, and introduce pre-training/fine-tuning scripts for Chinese. ", "> That's really interesting to read! Wow, 80 V100's, that's a lot.\r\n> \r\n> Some remarks:\r\n> \r\n> * I still need someone to verify my implementation of `LinformerSelfAttention` in `modeling_linformer.py`, as I'm currently the only one that implemented it, I'd like to have a second opinion to know for sure my implementation is correct. I used [this implementation](https://github.com/mlpen/Nystromformer/blob/main/code/attention_linformer.py) (from the author of Nystrรถmformer, who benchmarked several efficient self-attention implementations) as a reference. I also checked @lucidrains' implementation, available [here](https://github.com/lucidrains/linformer). 
However, as you see the loss going down, it might indeed be a tokenizer issue (I'm fairly sure my implementation is correct).\r\n> * In both the original paper and the Nystrรถmformer implementation, it looks like they rely on a RoBERTa encoder rather than BERT, hence they also use `RobertaTokenizer` rather than `BertTokenizer`. So this is something that might have to be updated (as mentioned at the top of this PR).\r\n> * Did you use `BertTokenizer.from_pretrained(\"bert-base-chinese\")`? Or did you train the tokenizer on Chinese Wikipedia before using it?\r\n> \r\n> And indeed, it would be great if we can provide the same functionality for Chinese, and introduce pre-training/fine-tuning scripts for Chinese.\r\n\r\n```\r\nlimit_alphabat = 30000\r\nspecial_tokens = [\"[PAD]\", \"[UNK]\", \"[CLS]\", \"[SEP]\", \"[MASK]\"]\r\n\r\ntokenizer = tokenizers.BertWordPieceTokenizer(clean_text=True, handle_chinese_chars=True, strip_accents=True, lowercase=True)\r\ntokenizer.train(\r\n files,\r\n vocab_size = args.vocab_size,\r\n min_frequency = args.min_frequency,\r\n show_progress = True,\r\n special_tokens = special_tokens,\r\n limit_alphabet = limit_alphabat,\r\n wordpieces_prefix = \"##\",\r\n )\r\ntry:\r\n os.mkdir(output_files)\r\nexcept:\r\n pass\r\ntokenizer.save_model(output_files)\r\n```\r\nMultiProcessing for tokenizer (1.3GB corpus is too large). Copy K times for dp mask (follow roberta)\r\n```\r\ndef _convert_to_transformer_inputs(question, answer, tokenizer, max_sequence_length):\r\n \"\"\"Converts tokenized input to ids, masks and segments for transformer (including bert)\"\"\"\r\n \r\n def return_id(str1, str2, truncation_strategy, length):\r\n\r\n inputs = tokenizer.encode_plus(str1, str2,\r\n add_special_tokens=True,\r\n max_length=length,\r\n truncation_strategy=truncation_strategy,\r\n truncation=True\r\n )\r\n \r\n input_ids = inputs[\"input_ids\"]\r\n input_masks = [1] * len(input_ids)\r\n input_segments = inputs[\"token_type_ids\"]\r\n padding_length = length - len(input_ids)\r\n padding_id = tokenizer.pad_token_id\r\n input_ids = input_ids + ([padding_id] * padding_length)\r\n input_masks = input_masks + ([0] * padding_length)\r\n input_segments = input_segments + ([0] * padding_length)\r\n \r\n return [input_ids, input_masks, input_segments]\r\n \r\n input_ids_q, input_masks_q, input_segments_q = return_id(\r\n question, answer, 'longest_first', max_sequence_length)\r\n \r\n return input_ids_q\r\n\r\ndef compute_input_arrays(lines, tokenizer, max_sequence_length):\r\n \r\n input_ids = Parallel(backend='multiprocessing', n_jobs=args.n_jobs, batch_size=512)\\\r\n (delayed(_convert_to_transformer_inputs)(line, None, tokenizer, max_sequence_length) for line in tqdm(lines))\r\n \r\n return [i for i in np.asarray(input_ids, dtype=np.int32)]\r\n\r\ntokenizer = tfs.BertTokenizer.from_pretrained(output_files + '/vocab.txt', maxlen=512)\r\nwith open(files, encoding=\"utf-8\") as f:\r\n lines = [line for line in tqdm(f.read().splitlines()) if (len(line) > 0 and not line.isspace())]\r\ndataset = compute_input_arrays(lines, tokenizer, args.maxlen)\r\n\r\ndp_mask = args.dp_mask\r\nshuffle = deepcopy(dataset)\r\nfor i in tqdm(range(dp_mask)):\r\n random.shuffle(shuffle)\r\n dataset.extend(shuffle)\r\n\r\nnp.save(files + \"-dpmask.npy\", dataset)\r\nlogger.info(\"Sentence Cut Finish\")\r\n```\r\n\r\n1. In many Chinese pre-training models, Roberta Tokenizer is often used as Bert Tokenizer, such as hfl:/roberta-chinese-wwm.\r\n\r\n2. 
The same problem occurs when I use the BertTokenizer to train the Bert. After the same parameters are trained, the AUC is still 0.5.\r\n\r\n```\r\nclass LineByLineTextDataset(Dataset):\r\n \"\"\"\r\n This will be superseded by a framework-agnostic approach soon.\r\n \"\"\"\r\n\r\n def __init__(self, tokenizer: PreTrainedTokenizer, file_path: str, block_size: int):\r\n assert os.path.isfile(file_path), f\"Input file path {file_path} not found\"\r\n # Here, we do not cache the features, operating under the assumption\r\n # that we will soon use fast multithreaded tokenizers from the\r\n # `tokenizers` repo everywhere =)\r\n\r\n lines = np.load(files + '-dpmask.npy')\r\n if args.debug:\r\n lines = lines[:20000]\r\n \r\n self.examples = [{\"input_ids\": torch.tensor(e, dtype=torch.long)} for e in lines]\r\n\r\n def __len__(self):\r\n return len(self.examples)\r\n\r\n def __getitem__(self, i) -> Dict[str, torch.tensor]:\r\n return self.examples[i]\r\n\r\n\r\nvocab_size = 21128\r\nseq_length = args.max_seq_length\r\nconfig = LinformerConfig(vocab_size=vocab_size, seq_length=seq_length, share_projection=True, k=seq_length//2)\r\nmodel = LinformerForMaskedLM(config=config)\r\n\r\ntokenizer = tfs.BertTokenizer.from_pretrained(output_files + '/vocab.txt', maxlen=seq_length)\r\ndataset = LineByLineTextDataset(tokenizer=tokenizer, file_path=files, block_size=seq_length)\r\ndatacol = tfs.DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)\r\n\r\ntrain_args = tfs.TrainingArguments(\r\n output_dir = output_files,\r\n overwrite_output_dir = True,\r\n num_train_epochs = args.train_epochs,\r\n per_device_train_batch_size = args.train_batch_size,\r\n gradient_accumulation_steps = args.gradient_accumulation_steps,\r\n save_steps = 10000,\r\n logging_steps = 500,\r\n save_total_limit = 10,\r\n fp16 = args.fp16,\r\n prediction_loss_only = True,\r\n dataloader_num_workers = args.num_workers,\r\n local_rank = args.local_rank,\r\n disable_tqdm = False,\r\n)\r\n\r\nlogger.info(\"TrainingArguments Init\")\r\nlogger.info(train_args)\r\ndecay_parameters = get_parameter_names(model, [torch.nn.LayerNorm])\r\ndecay_parameters = [name for name in decay_parameters if \"bias\" not in name]\r\noptimizer_grouped_parameters = [\r\n { \"params\": [p for n, p in model.named_parameters() if n in decay_parameters], \"weight_decay\": 0.0,},\r\n { \"params\": [p for n, p in model.named_parameters() if n not in decay_parameters], \"weight_decay\": 0.0,}\r\n]\r\n\r\noptimizer_cls = torch.optim.Adam(optimizer_grouped_parameters, lr=args.learning_rate)\r\nlr_cls = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer_cls, T_max=5, eta_min=0)\r\n\r\ntrainer = tfs.Trainer(\r\n model = model,\r\n args = train_args,\r\n data_collator = datacol,\r\n train_dataset = dataset,\r\n optimizers = (optimizer_cls, lr_cls),\r\n)\r\n\r\ntrainer.train()\r\nmodel.save_pretrained(output_files)\r\n```", "> I still need someone to verify my implementation of LinformerSelfAttention in modeling_linformer.py, as I'm currently the only one that implemented it, I'd like to have a second opinion to know for sure my implementation is correct.\r\n\r\nI would be happy to take a look now. Would you mind opening another PR and adding this in `research_projects/linformer` dir ? Just the modeling and config file should be enough.\r\n\r\nAlso, would it be possible to make the implem more similar to the official implem in `fairseq` ? That way we could compare apples to apples. 
We could then train a small model in `fairseq` and then port and compare it with this implem , that would make it a bit easier to verify the model.", "> > I still need someone to verify my implementation of LinformerSelfAttention in modeling_linformer.py, as I'm currently the only one that implemented it, I'd like to have a second opinion to know for sure my implementation is correct.\r\n> \r\n> I would be happy to take a look now. Would you mind opening another PR and adding this in `research_projects/linformer` dir ? Just the modeling and config file should be enough.\r\n> \r\n> Also, would it be possible to make the implem more similar to the official implem in `fairseq` ? That way we could compare apples to apples. We could then train a small model in `fairseq` and then port and compare it with this implem , that would make it a bit easier to verify the model.\r\n\r\nlol. I can provide GPUs for testing.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "unstale", "What happened to this? \r\nasking for a friend :D ", "Hi,\r\n\r\nWell actually I wonder if we could investigate the issue with fine-tuning LinFormer. \r\n\r\n@luoda888 would you be able to pre-train the model on English data (Books/Wikipedia)?", "Hello,\r\nSounds preety cool tbh, \r\ni'll try to tinker with. lets see if i go anywhere", "> Hi,\r\n> \r\n> Well actually I wonder if we could investigate the issue with fine-tuning LinFormer.\r\n> \r\n> @luoda888 would you be able to pre-train the model on English data (Books/Wikipedia)?\r\n\r\nI will try it" ]
1,615
1,648
null
CONTRIBUTOR
null
# What does this PR do? This PR adds [Linformer: Self-Attention with Linear Complexity](https://arxiv.org/abs/2006.04768) by Facebook AI. Contrary to the regular Transformer, it has linear complexity in both space and time w.r.t. the sequence length, allowing it to be trained on much larger sequence lengths. I've created both a PT and TF version using the CookieCutter template (BERT-based, encoder-only). The only difference with BERT is that: - `LinformerSelfAttention` looks a bit different compared to `BertSelfAttention`. The Linformer model adds 2 projection matrices to the keys and values, which project the sequence dimension (which is of size 512 by default) to a lower dimension k (such as `k=256`). As you'll see, I do not use the `extended_attention_mask` function which is used by other models. Instead, I cast the attention mask across the different attention heads by simply writing `attention_mask[:,None,:,None]`, which I then multiply by the keys and values, before projecting to the lower dimension. - One can choose to use the same projection matrix for both keys and values. This is determined by the `share_projection` attribute of `LinformerConfig`. - Linformer comes with 2 limitations: it cannot be used in the autoregressive case (see [this](https://github.com/lucidrains/linformer/issues/4#issuecomment-777781537)) - hence I removed the `is_decoder` logic, and it assumes a fixed sequence length. The latter is determined by the `seq_length` attribute of LinformerConfig, which is set to 512 by default. So if you provide `input_ids` and so on to the model, their length must be equal to the value of `seq_length`. Fixes the following issues: #4967 #5201 ## Before submitting - [ ] Did you write any new necessary tests? Yes I did, however I need some help to make sure all pass. Current status: for PyTorch, 30 are passing, 7 are failing. For Tensorflow, 26 passed, 9 failed. Note that there are currently no integration tests, as no weights were shared by the authors yet. However, they plan to release them as seen [here](https://github.com/pytorch/fairseq/issues/2795#issuecomment-720680448). - [ ] Also, it seems that the original authors relied on RoBERTa rather than BERT, so we might have to let `LinformerTokenizer` inherit from `RobertaTokenizer` rather than `BertTokenizer`. ## Who can review? @patil-suraj @LysandreJik
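To make the mechanism described above concrete, here is a stand-alone, single-head sketch of Linformer-style attention (project the sequence dimension of the keys and values from `seq_length` down to `k`). This is not the PR's `LinformerSelfAttention`; the class name and shapes are illustrative only.

```python
# Single-head Linformer-style attention sketch; the real model uses multiple heads.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinformerAttentionSketch(nn.Module):
    def __init__(self, hidden_size: int, seq_length: int, k: int, share_projection: bool = True):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        # Projection matrices act on the sequence axis, not the hidden axis.
        self.proj_k = nn.Linear(seq_length, k, bias=False)
        self.proj_v = self.proj_k if share_projection else nn.Linear(seq_length, k, bias=False)
        self.scale = hidden_size ** -0.5

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor = None) -> torch.Tensor:
        # hidden_states: (batch, seq_length, hidden); attention_mask: (batch, seq_length) of 1/0.
        q = self.query(hidden_states)
        k = self.key(hidden_states)
        v = self.value(hidden_states)
        if attention_mask is not None:
            # Zero out padded positions before compressing the sequence dimension.
            k = k * attention_mask[:, :, None]
            v = v * attention_mask[:, :, None]
        k = self.proj_k(k.transpose(1, 2)).transpose(1, 2)  # (batch, k, hidden)
        v = self.proj_v(v.transpose(1, 2)).transpose(1, 2)  # (batch, k, hidden)
        scores = torch.matmul(q, k.transpose(1, 2)) * self.scale  # (batch, seq_length, k)
        probs = F.softmax(scores, dim=-1)
        return torch.matmul(probs, v)  # (batch, seq_length, hidden)
```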
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10587/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10587", "html_url": "https://github.com/huggingface/transformers/pull/10587", "diff_url": "https://github.com/huggingface/transformers/pull/10587.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10587.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10586
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10586/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10586/comments
https://api.github.com/repos/huggingface/transformers/issues/10586/events
https://github.com/huggingface/transformers/pull/10586
824,230,794
MDExOlB1bGxSZXF1ZXN0NTg2NDk3Mjc4
10,586
from_pretrained: check that the pretrained model is for the right model architecture
{ "login": "vimarshc", "id": 7055306, "node_id": "MDQ6VXNlcjcwNTUzMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/7055306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vimarshc", "html_url": "https://github.com/vimarshc", "followers_url": "https://api.github.com/users/vimarshc/followers", "following_url": "https://api.github.com/users/vimarshc/following{/other_user}", "gists_url": "https://api.github.com/users/vimarshc/gists{/gist_id}", "starred_url": "https://api.github.com/users/vimarshc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vimarshc/subscriptions", "organizations_url": "https://api.github.com/users/vimarshc/orgs", "repos_url": "https://api.github.com/users/vimarshc/repos", "events_url": "https://api.github.com/users/vimarshc/events{/privacy}", "received_events_url": "https://api.github.com/users/vimarshc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @vimarshc, thank you for opening this PR! Could you:\r\n- rebase your PR on the most recent master so that the failing tests don't fail anymore\r\n- run `make fixup` at the root of your repository to fix your code quality issue (More information related to this on step 5 of [this document](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests)", "Awesome!\r\n\r\nWould you like to attempt to add a test for this check? \r\n\r\nWe need to use tiny models so it's fast and I made the suggestions here:\r\nhttps://github.com/huggingface/transformers/issues/10293#issuecomment-784630105\r\n\r\nIf you're not sure how to do it please let me know and I will add a test.\r\n", "Hi @stas00, \r\nI'd like to add the tests myself if that's ok. I have to add the same checks for the `from_pretrained` for Tokenizer however it's not as straightforward. The Tokenizer's `from_pretrained` is written with some assumptions in mind and I'm not entirely sure where to add the check. Here's the `from_pretrained` method for Tokenizers. \r\n\r\n\r\nRegardless, I shall try to add the test for this assertion I've already added and the changes mentioned by @LysandreJik in the next 24 hours. \r\n", "OK, so your change works for the model and the config:\r\n```\r\nPYTHONPATH=src python -c 'from transformers import PegasusForConditionalGeneration; PegasusForConditionalGeneration.from_pretrained(\"patrickvonplaten/t5-tiny-random\")'\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/modeling_utils.py\", line 975, in from_pretrained\r\n config, model_kwargs = cls.config_class.from_pretrained(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/configuration_utils.py\", line 387, in from_pretrained\r\n assert (\r\nAssertionError: You tried to initiate a model of type 'pegasus' with a pretrained model of type 't5'\r\n```\r\nsame for:\r\n```\r\nPYTHONPATH=src python -c 'from transformers import PegasusConfig; PegasusConfig.from_pretrained(\"patrickvonplaten/t5-tiny-random\")'\r\n```\r\n\r\nAs you discovered - and I didn't know - the tokenizer doesn't seem to need the config file, so it doesn't look there is a way to check that the tokenizer being downloaded is of the right kind. I will ask.\r\n\r\nAnd yes, it's great if you can add the test - thank you.\r\n\r\nI restyled your PR to fit our style guide - we don't use `format` and you need to run the code through `make fixup` or `make style` (slower) before committing - otherwise CIs may fail. Which is what @LysandreJik was requesting.\r\nhttps://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests\r\n\r\nSo please `git pull` your branch to get my updates.", "Hi @stas00, \r\nThanks for the update. \r\nWill take a pull, add the test and go through the checklist before pushing the changes. \r\nWill try to push in a few hours. ", "I'm puzzled. why did you undo my fix? 
If you want to restore it, it was:\r\n\r\n```\r\n--- a/src/transformers/configuration_utils.py\r\n+++ b/src/transformers/configuration_utils.py\r\n@@ -384,6 +384,9 @@ class PretrainedConfig(object):\r\n\r\n \"\"\"\r\n config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n+ assert (\r\n+ config_dict[\"model_type\"] == cls.model_type\r\n+ ), f\"You tried to initiate a model of type '{cls.model_type}' with a pretrained model of type '{config_dict['model_type']}'\"\r\n return cls.from_dict(config_dict, **kwargs)\r\n\r\n @classmethod\r\n```", "Hi, \r\nApologies. \r\nI rebased my branch and assumed had to force push which deleted your changes. ", "Hi, \r\nI have added the tests. \r\nEverything seems to be working fine. \r\n\r\nHowever, I pushed after taking a pull from the master, and yet it's showing a merge conflict. Not sure how that got there. ", "you messed up your PR branch - so this PR now contains dozens of unrelated changes.\r\n\r\nYou can do a soft reset to the last good sha, e.g.:\r\n```\r\ngit reset --soft d70a770\r\ngit commit\r\ngit push -f\r\n```\r\n\r\nJust save somewhere your newly added test code first.\r\n", "I think you picked the wrong sha and ended up with an even worse situation. Try `d70a770` as I suggested.", "OK, so looking at the errors - need to solve 2 issues:\r\n\r\n### Issue 1.\r\n```\r\n assert (\r\n> config_dict[\"model_type\"] == cls.model_type\r\n ), f\"You tried to initiate a model of type '{cls.model_type}' with a pretrained model of type '{config_dict['model_type']}'\"\r\nE KeyError: 'model_type'\r\n```\r\nso some models don't have the `model_type` key. \r\n\r\n@vimarshc, I suppose you need to edit the code to skip this assert if we don't have the data.\r\n\r\nYou can verify that your change works with this test:\r\n```\r\npytest -sv tests/test_trainer.py::TrainerIntegrationTest -k test_early_stopping_callback\r\n```\r\n\r\nI looked at the config.json generated by this test and it's:\r\n```\r\n{\r\n \"a\": 0,\r\n \"architectures\": [\r\n \"RegressionPreTrainedModel\"\r\n ],\r\n \"b\": 0,\r\n \"double_output\": false,\r\n \"transformers_version\": \"4.4.0.dev0\"\r\n}\r\n```\r\nso far from being complete.\r\n\r\n### Issue 2\r\n\r\nThis one looks trickier:\r\n```\r\nE AssertionError: You tried to initiate a model of type 'blenderbot-small' with a pretrained model of type 'blenderbot'\r\n```\r\n\r\nWe will ask for help with this one.", "@patrickvonplaten, @patil-suraj - your help is needed here.\r\n\r\nBlenderbotSmall has an inconsistency. It declares its model type as \"blenderbot-small\":\r\n```\r\nsrc/transformers/models/auto/configuration_auto.py: (\"blenderbot-small\", BlenderbotSmallConfig),\r\nsrc/transformers/models/auto/configuration_auto.py: (\"blenderbot-small\", \"BlenderbotSmall\"),\r\nsrc/transformers/models/blenderbot_small/configuration_blenderbot_small.py: model_type = \"blenderbot-small\"\r\n```\r\nbut the pretrained models all use `model_type: blenderbot`: https://huggingface.co/facebook/blenderbot-90M/blob/main/config.json\r\n\r\nSo this new sanity check this PR is trying to add fails. 
\r\n```\r\n config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n> assert (\r\n config_dict[\"model_type\"] == cls.model_type\r\n ), f\"You tried to initiate a model of type '{cls.model_type}' with a pretrained model of type '{config_dict['model_type']}'\"\r\nE AssertionError: You tried to initiate a model of type 'blenderbot-small' with a pretrained model of type 'blenderbot'\r\n```\r\n\r\nWhat shall we do?\r\n\r\nIt's possible that that part of the config object needs to be re-designed, so that there is a top architecture/type and then perhaps sub-types? \r\n", "Hi @stas00 \r\nWill add the check you mentioned today. \r\n", "Looks good, @vimarshc \r\n\r\nSo we are down to one failing test:\r\n\r\n```\r\ntests/test_modeling_blenderbot_small.py::Blenderbot90MIntegrationTests::test_90_generation_from_short_input\r\n```", "I wonder if we could sort of cheat and do:\r\n\r\n```\r\nif not cls.model_type in config_dict[\"model_type\"]: assert ...\r\n```\r\n\r\nso this will check whether the main type matches as a substring of a sub-type. It's not a precise solution, but will probably catch the majority of mismatches.\r\n\r\nActually for t5/mt5 it's reversed. `model_type` are t5 and mt5, but both may have `T5ForConditionalGeneration` as `architecture`. \r\nhttps://huggingface.co/google/mt5-base/blob/main/config.json#L16 since `MT5ForConditionalGeneration` is a copy of `T5ForConditionalGeneration` with the only difference of having `model_type = \"mt5\"`\r\n\r\nSo I think this check could fail in some situations. In which case we could perhaps check if one is a subset of another in either direction?\r\n\r\n```\r\nif not (cls.model_type in config_dict[\"model_type\"] or config_dict[\"model_type\"] in cls.model_type): assert ...\r\n```\r\n\r\nSo this proposes a sort of fuzzy-match.\r\n\r\n", ">BlenderbotSmall has an inconsistency. It declares its model type as \"blenderbot-small\":\r\n\r\n@stas00 You are right. Before the BART refactor all `blenderbot` models shared the same model class, but the config was not updated after the refactor. The `model_type` on the hub should be `blenderbot-small`. I will fix that.", "I updated the config https://huggingface.co/facebook/blenderbot-90M/blob/main/config.json.\r\n\r\nAnd actually, there's a new version of `blenderbot-90M` , https://huggingface.co/facebook/blenderbot_small-90M\r\n\r\nIt's actually the same model, but with the proper name. The blenderbot small test uses `blenderbot-90M` which should be changed to use this new model.", "Hi @stas00, \r\nThe fuzzy match approach will not work for the case 'distilbert' vs 'bert'. ", "> Hi @stas00,\r\n> The fuzzy match approach will not work for the case 'distilbert' vs 'bert'.\r\n\r\nThat's an excellent counter-example! As I proposed that it might mostly work ;)\r\n\r\nBut it looks like your original solution will now work after @patil-suraj fixing.\r\n\r\nsome unrelated test is failing - I rebased this branch - let's see if it will be green now.", "> I updated the config https://huggingface.co/facebook/blenderbot-90M/blob/main/config.json.\r\n> \r\n> And actually, there's a new version of `blenderbot-90M` , https://huggingface.co/facebook/blenderbot_small-90M\r\n> \r\n> It's actually the same model, but with the proper name. 
The blenderbot small test uses `blenderbot-90M` which should be changed to use this new model.\r\n\r\nThank you, Suraj!\r\n\r\nSince it's sort of related to this PR, do you want to push the change in here, or do it in another PR?", "Oh bummer, we have 2 more in TF land:\r\n```\r\nFAILED tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_compile_tf_model\r\nFAILED tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_save_load\r\n```\r\nsame issue for both tests:\r\n```\r\nE AssertionError: You tried to initiate a model of type 'xlm' with a pretrained model of type 'flaubert'\r\n```\r\n\r\n@LysandreJik, who can help resolving this one? Thank you!\r\n\r\n\r\n\r\n", "Yes, I'll take a look as soon as possible!", "I fixed the tests related to FlauBERT. Flax test is a flaky test that @patrickvonplaten is working on solving, and should not block this PR.", "Thank you for taking care of this, @LysandreJik \r\n\r\nI suppose we will take care of potentially doing the same for the Tokenizer validation in another PR.", "With the tokenizer it'll likely be a bit more complex, as it is perfectly possible to have decoupled models/tokenizers, e.g., a BERT model and a different tokenizer like it is the case in [BERTweet (config.json)](https://huggingface.co/vinai/bertweet-base/blob/main/config.json).", "Indeed, I think this will require a change where there is a required `tokenizer_config.json` which identifies itself which arch it belongs to, so while it should be possible to mix a model and tokenizer from different architectures, this shouldn't fail with random misleading errors like:\r\n\r\n```\r\npython -c 'from transformers import BartTokenizer; BartTokenizer.from_pretrained(\"prajjwal1/bert-tiny\")'\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py\", line 1693, in from_pretrained\r\n raise EnvironmentError(msg)\r\nOSError: Can't load tokenizer for 'prajjwal1/bert-tiny'. Make sure that:\r\n\r\n- 'prajjwal1/bert-tiny' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'prajjwal1/bert-tiny' is the correct path to a directory containing relevant tokenizer files\r\n```\r\n\r\nbut to indicate to the user that they got either the wrong tokenizer class or the the tokenizer identifier, since the above error is invalid - it's the correct identifier \r\n\r\nAs can be seen from:\r\n```\r\npython -c 'from transformers import BertTokenizer; BertTokenizer.from_pretrained(\"prajjwal1/bert-tiny\")'\r\n```\r\nwhich works.\r\n\r\n(and it erroneously says \"model identifier\" and there is no model here, but that's an unrelated minor issue).\r\n\r\nAnd of course there are many other ways I have seen this mismatch to fail, usually a lot noisier when it's missing some file.\r\n", "@LysandreJik, I rebased this PR and it looks good. v4.4.0 is out so we can probably merge this one now.\r\n\r\nThank you.", "Indeed, this is great! Thanks a lot @vimarshc and @stas00 for working on this.", "So should I create a new issue for doing the same for the Tokenizers? I think it'd be much more complicated since we don't save any tokenizer data at the moment that puts the tokenizer in any category/architecture.", "Hi, \r\nThanks, @stas00 for providing the guidance to close this issue. This is my first contribution to transformers so you can imagine my excitement. :D \r\nI understand that a similar change for Tokenizer will be a bit more complicated. 
Would love to take a shot at fixing that as well. :) " ]
1,615
1,616
1,616
CONTRIBUTOR
null
# What does this PR do? Adding Checks to the from_pretrained workflow to check the model name passed belongs to the model being initiated. Same checks need to be added for Tokenizer. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/10293 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
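As an illustration of the failure mode this PR guards against (the tiny model id is the one used in the discussion above), the call below is expected to fail fast with a clear message once the check is in place, instead of silently building a broken model. The exact exception type may differ between transformers versions, so this is a sketch rather than a guaranteed behavior.

```python
# Loading a T5 checkpoint into a Pegasus class should now be rejected up front.
from transformers import PegasusForConditionalGeneration

try:
    PegasusForConditionalGeneration.from_pretrained("patrickvonplaten/t5-tiny-random")
except Exception as err:  # e.g. AssertionError with this PR's version of the check
    print(err)
```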
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10586/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10586/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10586", "html_url": "https://github.com/huggingface/transformers/pull/10586", "diff_url": "https://github.com/huggingface/transformers/pull/10586.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10586.patch", "merged_at": 1616086302000 }
https://api.github.com/repos/huggingface/transformers/issues/10585
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10585/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10585/comments
https://api.github.com/repos/huggingface/transformers/issues/10585/events
https://github.com/huggingface/transformers/pull/10585
824,203,225
MDExOlB1bGxSZXF1ZXN0NTg2NDc0MjA0
10,585
[run_seq2seq] fix nltk lookup
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm going to merge this since the issue could be interfering with other PRs." ]
1,615
1,615
1,615
CONTRIBUTOR
null
Hmm, CI crashes every so often on ``` try: nltk.data.find("tokenizers/punkt") except LookupError: ``` introduced in this PR: https://github.com/huggingface/transformers/pull/10407 https://app.circleci.com/pipelines/github/huggingface/transformers/20635/workflows/989fde0b-e543-4620-9d9a-f213ad53dd9b/jobs/176742 ``` __________________ ERROR collecting examples/test_examples.py __________________ examples/test_examples.py:51: in <module> import run_seq2seq examples/seq2seq/run_seq2seq.py:54: in <module> nltk.data.find("tokenizers/punkt") ../.local/lib/python3.6/site-packages/nltk/data.py:539: in find return FileSystemPathPointer(p) ../.local/lib/python3.6/site-packages/nltk/compat.py:41: in _decorator return init_func(*args, **kwargs) ../.local/lib/python3.6/site-packages/nltk/data.py:315: in __init__ raise IOError("No such file or directory: %r" % _path) E OSError: No such file or directory: '/home/circleci/nltk_data/tokenizers/punkt/PY3' ``` which is odd, re-running the job fixed the problem. So trying to mend it with: ``` try: nltk.data.find("tokenizers/punkt") except (LookupError, OSError): ``` not sure why the Exception was different here. @sgugger, @LysandreJik
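For readers hitting the same error locally, a minimal sketch of the defensive punkt lookup discussed above; the quiet download fallback is an assumption here, not necessarily the exact code merged in the PR:

```python
import nltk

try:
    nltk.data.find("tokenizers/punkt")
except (LookupError, OSError):
    # fall back to downloading the tokenizer data; quiet=True keeps CI logs clean
    nltk.download("punkt", quiet=True)
```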
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10585/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10585", "html_url": "https://github.com/huggingface/transformers/pull/10585", "diff_url": "https://github.com/huggingface/transformers/pull/10585.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10585.patch", "merged_at": 1615183799000 }
https://api.github.com/repos/huggingface/transformers/issues/10584
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10584/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10584/comments
https://api.github.com/repos/huggingface/transformers/issues/10584/events
https://github.com/huggingface/transformers/pull/10584
824,198,842
MDExOlB1bGxSZXF1ZXN0NTg2NDcwNTI4
10,584
[examples tests] various fixes
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's still hanging waiting for something. If I'm not mistaken it's hanging in saving the model. but there is another thread that might be relevant:\r\n```\r\nThread 0x00007f6dd984b740 (most recent call first):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/_utils.py\", line 45 in _type\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/__init__.py\", line 496 in type\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/storage.py\", line 72 in cpu\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/serialization.py\", line 488 in _save\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/serialization.py\", line 372 in save\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/src/transformers/modeling_utils.py\", line 835 in save_pretrained\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/src/transformers/trainer.py\", line 1535 in _save\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/src/transformers/trainer.py\", line 1496 in save_model\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/src/transformers/trainer.py\", line 1204 in _save_checkpoint\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/src/transformers/trainer.py\", line 1179 in _maybe_log_save_evaluate\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/src/transformers/trainer.py\", line 1088 in train\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/examples/seq2seq/run_seq2seq.py\", line 590 in main\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/examples/seq2seq/run_seq2seq.py\", line 654 in <module>\r\n\r\nThread 0x00007f59389d2740 (most recent call first):\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/src/transformers/trainer.py\", line 1552 in store_flos\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/src/transformers/trainer.py\", line 1132 in train\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/examples/seq2seq/run_seq2seq.py\", line 590 in main\r\n File \"/mnt/nvme1/code/huggingface/transformers-test-fix/examples/seq2seq/run_seq2seq.py\", line 654 in <module>\r\n```\r\nRight after:\r\n```\r\nSaving model checkpoint to /tmp/tmp4rujlsox/checkpoint-1 'epoch': 1.0}\r\nstderr: Configuration saved in /tmp/tmp4rujlsox/checkpoint-1/config.json\r\n```\r\n\r\nYou can add:\r\n\r\n```\r\n--- a/examples/seq2seq/run_seq2seq.py\r\n+++ b/examples/seq2seq/run_seq2seq.py\r\n@@ -649,4 +649,6 @@ def _mp_fn(index):\r\n\r\n\r\n if __name__ == \"__main__\":\r\n+ import faulthandler\r\n+ faulthandler.dump_traceback_later(20, repeat=True)\r\n main()\r\n```\r\n\r\nTo get the hanging bt.\r\n\r\nAnd it's best debug outside of pytest, otherwise some bt gets messed up it seems.", "Oh? It was running to completion on my side after the removal. So let's merge as is and I will look into the failure when I have time." ]
1,615
1,615
1,615
CONTRIBUTOR
null
This PR is fixing slow examples tests that currently fail on scheduled CI 2 more tests will be fixed by https://github.com/huggingface/transformers/pull/10551 This PR: Sharded DDP issues: * fixes fully sharded ddp enum - and the corresponding tests * 2 sharded ddp tests currently hang with master fairscale - add skip until this is sorted out - didn't want to step on @sgugger's toes - so for now just skipping Tests: * changes a large group of tests to check loss is not nan * fix `test_run_seq2seq_slow` test - missed by my PR https://github.com/huggingface/transformers/pull/10428 - make it more resilient - was failing quality-wise on multi-gpu * then we have an issue with apex - once run in a worker directly it breaks other tests running directly in the same pytest worker: ``` # XXX: apex breaks the trainer if it's run twice e.g. run_seq2seq.main() from the same # program and it breaks other tests that run from the same pytest worker, therefore until this is # sorted out it must be run only in an external program, that is distributed=True in this # test and only under one or more gpus - if we want cpu will need to make a special test # # specifically to the problem traced it to self.optimizer.step() - if it's run 2nd time via # 2nd main() call it botches the future eval. ``` I'm not quite sure what happens but I think no-one will run into this in a normal situation, here we basically end up running: ``` main() main() ``` in the same process. We have some internal state that doesn't get reset. So as I wrote above I used a workaround of running the apex integration test in a separate process so that it doesn't affect the rest of the test suite. Of course if you have ideas on what the problem is and how to fix it I'm all ears. It's very simple to reproduce it: ``` def test_run_seq2seq_apex(self): self.run_seq2seq_quick(distributed=True, extra_args_str="--fp16 --fp16_backend=apex") # test 2nd time - was getting eval_loss': nan' # to reproduce the problem set distributed=False self.run_seq2seq_quick(distributed=True, extra_args_str="--fp16 --fp16_backend=apex") ``` I can move it out into a separate PR if it's too much for one. @LysandreJik, @sgugger
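A generic sketch of the process-isolation idea behind the apex workaround described above; the script path and flags are placeholders rather than the actual test invocation:

```python
import subprocess
import sys

# hypothetical command: launching the apex run in a fresh interpreter means whatever
# global state apex sets up is thrown away with the child process instead of leaking
# into the next run inside the same pytest worker
cmd = [sys.executable, "examples/seq2seq/run_seq2seq.py", "--fp16", "--fp16_backend", "apex"]
result = subprocess.run(cmd, capture_output=True, text=True)
print("exit code:", result.returncode)
```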
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10584/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10584", "html_url": "https://github.com/huggingface/transformers/pull/10584", "diff_url": "https://github.com/huggingface/transformers/pull/10584.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10584.patch", "merged_at": 1615228124000 }
https://api.github.com/repos/huggingface/transformers/issues/10583
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10583/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10583/comments
https://api.github.com/repos/huggingface/transformers/issues/10583/events
https://github.com/huggingface/transformers/pull/10583
824,183,924
MDExOlB1bGxSZXF1ZXN0NTg2NDU3NjA2
10,583
[trainer] fix double wrapping + test
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
We have an issue with ``` trainer.train() trainer.train() ``` under any environment that requires model wrapping - we currently get the wrapping multiple times - and things may kind of work - but most of the time it breaks badly - thanks to apex for complaining noisily when it's being wrapped second time. i.e. we get things like `DataParallel(DataParallel(model))` This PR fixes this problem and adds a test. @sgugger, @LysandreJik
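A minimal sketch of the guard-against-rewrapping idea; this is an illustration of the failure mode, not the actual Trainer fix:

```python
import torch.nn as nn

def wrap_once(model: nn.Module) -> nn.Module:
    # if the model is already wrapped (e.g. by a previous trainer.train() call),
    # reuse the existing wrapper instead of nesting DataParallel(DataParallel(model))
    if isinstance(model, nn.DataParallel):
        return model
    return nn.DataParallel(model)
```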
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10583/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10583", "html_url": "https://github.com/huggingface/transformers/pull/10583", "diff_url": "https://github.com/huggingface/transformers/pull/10583.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10583.patch", "merged_at": 1615216556000 }
https://api.github.com/repos/huggingface/transformers/issues/10582
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10582/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10582/comments
https://api.github.com/repos/huggingface/transformers/issues/10582/events
https://github.com/huggingface/transformers/pull/10582
824,133,292
MDExOlB1bGxSZXF1ZXN0NTg2NDE0NTcx
10,582
wrong model used for BART Summarization example
{ "login": "orena1", "id": 8983713, "node_id": "MDQ6VXNlcjg5ODM3MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/8983713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orena1", "html_url": "https://github.com/orena1", "followers_url": "https://api.github.com/users/orena1/followers", "following_url": "https://api.github.com/users/orena1/following{/other_user}", "gists_url": "https://api.github.com/users/orena1/gists{/gist_id}", "starred_url": "https://api.github.com/users/orena1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orena1/subscriptions", "organizations_url": "https://api.github.com/users/orena1/orgs", "repos_url": "https://api.github.com/users/orena1/repos", "events_url": "https://api.github.com/users/orena1/events{/privacy}", "received_events_url": "https://api.github.com/users/orena1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
I'm pretty sure that `bart-large` was not trained for summarization, I replaced it with `bart-large-cnn` which is a model that was fine-tuned for summarization # What does this PR do? replace model used in Summarization example
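A hedged usage sketch of the corrected example (model id as in the PR; the sample text is made up):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and was the tallest man-made structure in the world for 41 years."
)
# the pipeline returns a list of dicts with a "summary_text" field
print(summarizer(article, max_length=40, min_length=10, do_sample=False)[0]["summary_text"])
```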
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10582/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10582", "html_url": "https://github.com/huggingface/transformers/pull/10582", "diff_url": "https://github.com/huggingface/transformers/pull/10582.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10582.patch", "merged_at": 1615198506000 }
https://api.github.com/repos/huggingface/transformers/issues/10581
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10581/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10581/comments
https://api.github.com/repos/huggingface/transformers/issues/10581/events
https://github.com/huggingface/transformers/pull/10581
824,079,080
MDExOlB1bGxSZXF1ZXN0NTg2MzcxNDgw
10,581
wav2vec2: support datasets other than LibriSpeech
{ "login": "elgeish", "id": 6879673, "node_id": "MDQ6VXNlcjY4Nzk2NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elgeish", "html_url": "https://github.com/elgeish", "followers_url": "https://api.github.com/users/elgeish/followers", "following_url": "https://api.github.com/users/elgeish/following{/other_user}", "gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}", "starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elgeish/subscriptions", "organizations_url": "https://api.github.com/users/elgeish/orgs", "repos_url": "https://api.github.com/users/elgeish/repos", "events_url": "https://api.github.com/users/elgeish/events{/privacy}", "received_events_url": "https://api.github.com/users/elgeish/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Thanks to a fix for `timit_asr` that @patrickvonplaten made, now I have some good results using `wav2vec2-base`:\r\n\r\n<img width=\"1050\" alt=\"timit_asr_pr_ 10581\" src=\"https://user-images.githubusercontent.com/6879673/110573935-0435e480-8111-11eb-8e0e-845af4e2eab7.png\">\r\n\r\nI'm running one for `arabic_speech_corpus` using `wav2vec2-base` as well. @patrickvonplaten let me know when you have the rest of configs for https://huggingface.co/facebook/wav2vec2-large-xlsr uploaded so I can try it as well (or a workaround). Thanks!", "Thanks for adding this functionality! Your script worked succesfully when I fine-tuned `wav2vec2-base` , `xlsr` (https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr_53_56k.pt) and `wa2vec2-large-100k` (multilingual Large Model from https://github.com/facebookresearch/voxpopuli#pre-trained-models pre-trained on VoxPopuli dataset) on TIMIT dataset. If fine-tuning on some another custom dataset, is it enough to set `--orthography` to `timit` in `run_asr.py` if the transcriptions are lowercased and `librispeech` if they are uppercased?", "> Thanks for adding this functionality! Your script worked succesfully when I fine-tuned `wav2vec2-base` , `xlsr` (https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr_53_56k.pt) and `wa2vec2-large-100k` (multilingual Large Model from https://github.com/facebookresearch/voxpopuli#pre-trained-models pre-trained on VoxPopuli dataset) on TIMIT dataset.\r\n\r\nThanks, @Getmany1 - you can do me a favor and run it with `arabic_speech_corpus` dataset and `--target_feature_extractor_sampling_rate --orthography buckwalter` on `xlsr` to verify it works with extended vocab. Unfortunately I can't fit `xlsr` on my machine.\r\n\r\n> If fine-tuning on some another custom dataset, is it enough to set `--orthography` to `timit` in `run_asr.py` if the transcriptions are lowercased and `librispeech` if they are uppercased?\r\nFor the most part, yes! Run it with `--verbose_logging` to see how the orthography rules pre-processed the text. Keep us posted!", "`XLSR `model I used didn't work with this setup: the training loss is _nan_ when I tried to fine-tune on Arabic corpus. If I got it correctly, with `--orthography buckwalter` you modify the tokenizer only. However, if you load e.g. \r\n`model = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-large-xlsr-53\")`\r\nand check it structure\r\n`print(model.state_dict)`\r\nyou'll see that the last layer of the network is the LM head with the default vocabulary size:\r\n`(lm_head): Linear(in_features=1024, out_features=32, bias=True)`\r\nIf I understand correctly, you need to convert the model manually if you want to have a letter vocabulary different from english.\r\nI converted the fairseq xlsr checkpoint using this script\r\n`transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py`\r\ntogether with \"custom\", Swedish letter dictionary, and then succeeded to fine-tune it on a Swedish corpus.\r\nI guess you need to do the same for Arabic in order to have proper LM head on top of the model.", "Yes, thanks! I missed that step (which is similar to `PreTrainedModel.resize_token_embeddings()`). I'm adding a method for that: `Wav2Vec2ForCTC.resize_lm_head()` which simply calls `PreTrainedModel.model._get_resized_lm_head()` and updates model config. The LM head looks good when inspecting the return value. 
I'll add some unit tests as well.\r\n\r\nNow I'm fine-tuning it on `wav2vec2-base` but I'm not expecting great results given the phonetic differences with Arabic. If you can try it again with `xlsr`, I'd appreciate it!", "Thanks @elgeish, the method seems to work correctly. Now the loss is not `nan` any more during the fine-tuning of the `xlsr` model. Training loss:\r\n\r\n```\r\n{'loss': 584.0067, 'learning_rate': 8.333333333333333e-05, 'epoch': 0.28}\r\n{'loss': 291.7098, 'learning_rate': 0.00016666666666666666, 'epoch': 0.55}\r\n```\r\nValidation loss:\r\n```\r\n{'eval_loss': 374.4364929199219, 'eval_wer': 1.0, 'eval_runtime': 22.3424, 'eval_samples_per_second': 4.476, 'epoch': 0.28}\r\n{'eval_loss': 374.20855712890625, 'eval_wer': 1.0, 'eval_runtime': 22.8504, 'eval_samples_per_second': 4.376, 'epoch': 0.55}\r\n```\r\nPredictions after epoch 0.55:\r\n```\r\n03/12/2021 12:07:54 - DEBUG - __main__ - reference: \"wayaquwlu lEulamA'u <in~ahu min gayri lmuraj~aHi >an tuTaw~ira lbaktiyryA lmuEdiyapu muqAwamapan Did~a lEilAji ljadiyd >al~a*iy >aSbaHa mutAHan biAlfiEl fiy $akli marhamin lil>amrADi ljildiy~api\"\r\n03/12/2021 12:07:54 - DEBUG - __main__ - prediction: \"\"\r\n03/12/2021 12:07:54 - DEBUG - __main__ - reference: \"wayumkinuka lHuSuwlu EalY taTbiyqAtin lilt~adriybAti l>asAsiy~api maj~Anan\"\r\n03/12/2021 12:07:54 - DEBUG - __main__ - prediction: \"\"\r\n```\r\n\r\nUnfortunately, after that I always run out of memory in Colab. The recordings in the Arabic corpus are probably very long. As a result even with `batch_size=1` 16Gb of GPU memory is not enough:\r\n```\r\nRuntimeError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 15.90 GiB total capacity; 14.80 GiB already allocated; 87.75 MiB free; 14.94 GiB reserved in total by PyTorch)\r\n 2% 1050/54390 [07:16<6:09:57, 2.40it/s]\r\n```\r\nIf you have more than 16Gb of GPU memory available, I can share the `xlsr` model to try it out on your machine.", "I added a `--max_duration_in_seconds` filter. I'm seeing ok results now fine-tuning `wav2vec2-base` after 500 steps, for example:\r\n```\r\nreference: \"wamin tilka ls~ilaE >al$~Ayu lS~iyniy~u wAlwaraqu wAlbAruwdu wAlbuwSilapu\"\r\nprediction: \"wamin tiloka Als~ila>a$~aAyu AS~iyniy~u walowaraqu waAlobaAruwdu waAlobuwSilapu\"\r\n```\r\nI've also started fine-tuning [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) after adding missing files locally on my machine.\r\n\r\nFor future PRs, I'm thinking of:\r\n* Supporting other languages and examples from [Patrick's awesome blog post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2), which I need to read\r\n* CER and WER transformations for other languages (e.g., ignoring tashkil in Arabic)\r\n* Supporting [Lhotse](https://lhotse.readthedocs.io/en/latest/corpus.html) datasets, which provide a ton of speech-specific functionality\r\n\r\nBefore adding UTs, I want to check with @patrickvonplaten the code here is on the right track.\r\n@Getmany1 you've been super helpful, thank you!", "> Hey @elgeish,\r\n> \r\n> Thanks a lot for your PR! Sorry that I only reviewed it now. In general I like the changes you have made to `run_wav2vec2.py` :-)\r\n> \r\n> A couple of things, I'd like to change:\r\n> \r\n> * Remove the change from `modeling_wav2vec2.py`. I don't really want a resize lm head method in the model there -> it's too much of an edge case IMO. 
You can simply first instantiate the processor then use the tokenizer length to load the model with `Wav2Vec2ForCTC.from_pretrained(model_path, vocab_size=len(tokenizer))`. This way we don't need a `resize_lm_head()` method.\r\n> * Can you give me more context on the `buckwalter.json` file?\r\n> * Can you also add more text to the README.md that explains a bit how to use the `run_wav2vec2.py` script? E.g. what does the orthography class do, what is the buckwalter vocab?\r\n\r\nThank you! I responded inline. I'll update `README.md` as well. I think you mean `run_asr.py`, no?", "Great the PR looks good to me now! Thanks a lot for doing this :-) The failures seem unrelated, so I'll rerun the tests again to make the CI happy" ]
1,615
1,616
1,616
CONTRIBUTOR
null
# What does this PR do? Building on #10145, I'm adding support for the two other speech datasets (besides LibriSpeech) for ASR at the time of writing (`timit_asr` and `arabic_speech_corpus`), which require the following: * Custom validation split name * On-the-fly resampling support to target feature extractor's sampling rate (via `librosa` -- see `requirements.txt`) * Max duration (in seconds) filter to remove outliers (which may crash GPU training when running OOM) * Verbose logging to help debug custom datasets, including reverse transliteration (via `lang-trans` -- see `requirements.txt`) * Pre-processing: tokenization and text normalization using orthography rules * Casing (`do_lower_case`) * Custom vocab for tokens used in orthography (e.g., Buckwalter transliteration for Arabic) * Custom word delimiter token (when the default, `"|"`, is used in orthography) * Transformations similar to those in `jiwer` to normalize transcripts for training * Removing special words (e.g., "sil" which can be used to indicate silence) * Translation table (e.g., "-" -> " " to break compounds like "quarter-century-old") * Cleaning up characters not in vocab (after applying the rules above) Arabic model: https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-arabic TIMIT models: https://huggingface.co/elgeish/wav2vec2-base-timit-asr and https://huggingface.co/elgeish/wav2vec2-large-lv60-timit-asr ## Who can review? @patrickvonplaten @sgugger @LysandreJik @patil-suraj @SeanNaren
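A small sketch of the on-the-fly resampling step mentioned in the PR description; the function name and default rate are illustrative, not the script's actual helpers:

```python
import librosa

def to_target_sampling_rate(speech, orig_sr, target_sr=16_000):
    # resample only when the source rate differs from the feature extractor's rate
    if orig_sr == target_sr:
        return speech
    return librosa.resample(speech, orig_sr=orig_sr, target_sr=target_sr)
```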
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10581/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10581/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10581", "html_url": "https://github.com/huggingface/transformers/pull/10581", "diff_url": "https://github.com/huggingface/transformers/pull/10581.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10581.patch", "merged_at": 1616052026000 }
https://api.github.com/repos/huggingface/transformers/issues/10580
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10580/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10580/comments
https://api.github.com/repos/huggingface/transformers/issues/10580/events
https://github.com/huggingface/transformers/issues/10580
824,032,279
MDU6SXNzdWU4MjQwMzIyNzk=
10,580
Issue when customizing loss in Trainer
{ "login": "LedaguenelArthur", "id": 73159756, "node_id": "MDQ6VXNlcjczMTU5NzU2", "avatar_url": "https://avatars.githubusercontent.com/u/73159756?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LedaguenelArthur", "html_url": "https://github.com/LedaguenelArthur", "followers_url": "https://api.github.com/users/LedaguenelArthur/followers", "following_url": "https://api.github.com/users/LedaguenelArthur/following{/other_user}", "gists_url": "https://api.github.com/users/LedaguenelArthur/gists{/gist_id}", "starred_url": "https://api.github.com/users/LedaguenelArthur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LedaguenelArthur/subscriptions", "organizations_url": "https://api.github.com/users/LedaguenelArthur/orgs", "repos_url": "https://api.github.com/users/LedaguenelArthur/repos", "events_url": "https://api.github.com/users/LedaguenelArthur/events{/privacy}", "received_events_url": "https://api.github.com/users/LedaguenelArthur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think you may be experiencing a bug that was fixed since then (I would need the whole error message to be sure) so before we dive further, could you see if an [install from source](https://huggingface.co/transformers/installation.html#installing-from-source) solves your problem?", "Hi @sgugger,\r\n\r\nThank you very much for the quick answer, I tested with the installation from source and he it worked !" ]
1,615
1,615
1,615
NONE
null
Hi everyone, I am a student and therefore not yet very familiar with how issue reports work on GitHub, so I apologize in advance if this is not the proper place to post this message. I'm trying to customize the loss to use a weighted CrossEntropyLoss. I browsed the issue reports and saw that this matter was already mentioned and a solution was provided by the developers of the Transformers library (I think it was @sgugger); I tried to follow their code snippets as closely as possible but always ended up with the same error. I'm working with the latest version of Transformers and this is my code: ``` config = AutoConfig.from_pretrained("bert-base-cased", num_labels=2, finetuning_task="SST-2") # Test with modified trainer for weighted CrossEntropyLoss model = AutoModelForSequenceClassification.from_pretrained( "dmis-lab/biobert-base-cased-v1.1", from_tf=False, config=config) from torch import FloatTensor classDistribution_raw = [97, 3] classDistribution = [0.8, 0.2] normedWeights = [1 - (x / sum(classDistribution)) for x in classDistribution] normedWeights = FloatTensor(normedWeights).cuda() from torch.nn import CrossEntropyLoss class MyTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): if "labels" in inputs: labels = inputs.pop("labels") outputs = model(**inputs) logits = outputs.logits loss_function = CrossEntropyLoss(weight = normedWeights) if self.args.past_index >= 0: self._past = outputs[self.args.past_index] if labels is not None: loss = loss_function(logits, labels) else: # We don't use .loss here since the model may return tuples instead of ModelOutput. loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0] return (loss, outputs) if return_outputs else loss trainer = MyTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics_fn, tokenizer=tokenizer, ) ``` And this is the error I keep getting: `'NoneType' object has no attribute 'detach'` I'm probably doing something wrong but I can't understand what. Thanks in advance for your answers; I remain available if you need any more details about my setup or my code. Best regards, Arthur Ledaguenel
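For reference, a minimal sketch of the same weighted-loss override against the Trainer API of this era; the weight values simply mirror the ones computed in the issue and are not prescriptive:

```python
import torch
from torch.nn import CrossEntropyLoss
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    # example per-class weights, moved onto the logits' device at loss time
    class_weights = torch.tensor([0.2, 0.8])

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss_fct = CrossEntropyLoss(weight=self.class_weights.to(logits.device))
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```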
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10580/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10579
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10579/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10579/comments
https://api.github.com/repos/huggingface/transformers/issues/10579/events
https://github.com/huggingface/transformers/issues/10579
824,002,488
MDU6SXNzdWU4MjQwMDI0ODg=
10,579
request about deepspeed tutorial
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "@dorost1234, thank you for the kind words. I'm glad to hear it was useful.\r\n\r\nIn general to answer the bulk of your questions - you will find the full documentation here:\r\nhttps://huggingface.co/transformers/master/main_classes/trainer.html#deepspeed\r\n\r\nPlease let me know if you still have any question after reading it.\r\n\r\n> [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 131072.0, reducing to 65536.0\r\n\r\nI shall document this. This is just a warning that DeepSpeed prints when it delays the optimizer stepping due to fp16 dynamic scaling. I definitely want to document that since it's alarming and hard to understand. and how to get the optimizer kick on the first step.\r\n\r\nAlso I started working on a notebook, which you can try - should work on jupyter or colab:\r\nhttps://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb\r\nPlease let me know what you'd like to be added to it.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
Dear @stas00, you created the great tutorial here that made training these large models possible; without it this would have been nearly impossible, thank you so much: https://github.com/huggingface/transformers/issues/8771 Would you mind updating your comment to include how the numbers would change with distributed training plus DeepSpeed, and what the command should look like in that case, so we can speed up DeepSpeed further on multiple GPUs? I also get this message with DeepSpeed on transformers 4.3.3 when running the tutorial: [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 131072.0, reducing to 65536.0 Could you tell me whether this indicates a problem with training? Sorry, I am very unfamiliar with DeepSpeed. One more question: in the tutorial you did not use --fp16; could you add a few comments on whether we can use it with DeepSpeed? This is such a great tutorial and it would be great to have all the info in one place. Thank you so much for the hard work and all the help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10579/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10579/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10578
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10578/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10578/comments
https://api.github.com/repos/huggingface/transformers/issues/10578/events
https://github.com/huggingface/transformers/issues/10578
823,987,220
MDU6SXNzdWU4MjM5ODcyMjA=
10,578
Why does HfArgumentParser.parse_dict(TrainingArguments) return a tuple instead of a dict?
{ "login": "alierenak", "id": 48334667, "node_id": "MDQ6VXNlcjQ4MzM0NjY3", "avatar_url": "https://avatars.githubusercontent.com/u/48334667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alierenak", "html_url": "https://github.com/alierenak", "followers_url": "https://api.github.com/users/alierenak/followers", "following_url": "https://api.github.com/users/alierenak/following{/other_user}", "gists_url": "https://api.github.com/users/alierenak/gists{/gist_id}", "starred_url": "https://api.github.com/users/alierenak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alierenak/subscriptions", "organizations_url": "https://api.github.com/users/alierenak/orgs", "repos_url": "https://api.github.com/users/alierenak/repos", "events_url": "https://api.github.com/users/alierenak/events{/privacy}", "received_events_url": "https://api.github.com/users/alierenak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi @akalieren \r\n\r\nthe `parse_dict` or `parse_args_into_dataclasses` methods always return a `tuple` of parsed arguments for each `dataclass` that was used to initialize `HfArgumentParser`. Here you're initializing it with just `TrainingArguments` so `parse_dict` returns a `tuple` of length 1.\r\n\r\nHope this helps.", "Thanks for quick response @patil-suraj,\r\n\r\nI am understanding it returns `tuple` of initializing given data classes; however, my question is that is there a reason why it returns `tuple` instead of just `*outputs`. \r\n\r\nI hope this is not kind of misunderstanding, and grateful about your time.\r\nThanks.\r\n\r\n", "`return *outputs` is not a valid python statement. Also `outputs` here is a list and it's just a good python practice to return return `tuple` instead of `list` when returning multiple values.", "I tried it and realized that you are right. Thanks a lot for your interest and kind responses.\r\n\r\nI am closing this issue. Wish healthy days." ]
1,615
1,615
1,615
NONE
null
I guess this is not strictly a bug, but I could not quite understand why `HfArgumentParser.parse_dict()` returns `(*outputs,)`, as can be seen in the [docs](https://huggingface.co/transformers/_modules/transformers/hf_argparser.html#HfArgumentParser). I am trying to fine-tune BERT for Token Classification using the Trainer class, and my aim is to turn an argparse object into TrainingArguments. I converted the ArgumentParser object to a dictionary and then used `HfArgumentParser.parse_dict()` to turn it into a TrainingArguments object. However, I realized that `HfArgumentParser.parse_dict()` returns a `(TrainingArguments, )` tuple, which causes the following error in the Trainer initializer. ```console Traceback (most recent call last): File "transformers_ner.py", line 344, in <module> finetune_model(args) File "transformers_ner.py", line 308, in finetune_model args=train_args File "/home/akali/.local/lib/python3.6/site-packages/transformers/trainer.py", line 237, in __init__ set_seed(self.args.seed) AttributeError: 'tuple' object has no attribute 'seed' ``` I get train_args by: ```python # ArgParser --> Training Arguments HFParser = HfArgumentParser(TrainingArguments) train_args = HFParser.parse_dict(args) ``` I know I can get the TrainingArguments object by doing `train_args[0]`; however, isn't it odd that `HfArgumentParser.parse_dict()` returns a tuple instead of a TrainingArguments object directly? Thanks.
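A short sketch of the unpacking pattern discussed in the comments; the example dict keys are simply valid TrainingArguments fields chosen for illustration:

```python
from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)
# parse_dict returns one dataclass instance per dataclass the parser was built with,
# so a single-dataclass parser yields a 1-tuple that can be unpacked directly
(training_args,) = parser.parse_dict({"output_dir": "out", "seed": 42})
print(training_args.seed)
```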
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10578/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10577
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10577/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10577/comments
https://api.github.com/repos/huggingface/transformers/issues/10577/events
https://github.com/huggingface/transformers/issues/10577
823,986,008
MDU6SXNzdWU4MjM5ODYwMDg=
10,577
seq2seq example with T5 does not run due to an issue with loading the tokenizer
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "solved with installing sentencepiece, I appreciate adding a file mentioning requirements.txt thanks ", "Hi @dorost1234 ,\r\nGlad you resolved the issue. Your `transformers` version is old, we have now added the `sentencepiece` dependency in `requirements.txt`.\r\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/requirements.txt#L2" ]
1,615
1,615
1,615
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten, @patil-suraj ## Information Hi I am trying to run run_seq2seq.py example on mt5 model ` python run_seq2seq.py --model_name_or_path google/mt5-small --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --tokenizer_name google/mt5-small ` getting this error: ``` Traceback (most recent call last): File "run_seq2seq.py", line 539, in <module> main() File "run_seq2seq.py", line 309, in main use_auth_token=True if model_args.use_auth_token else None, File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 379, in from_pretrained return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1789, in from_pretrained resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 147, in __init__ **kwargs, File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 103, in __init__ "Couldn't instantiate the backend tokenizer from one of: " ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one. ``` thank you for your help
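As noted in the comments, installing `sentencepiece` resolves this; a quick hedged sanity check once it is installed (requires network access to download the tokenizer files):

```python
import sentencepiece  # noqa: F401  (raises ImportError if the dependency is still missing)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
print(tokenizer("translate English to Romanian: Hello")["input_ids"][:10])
```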
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10577/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10576
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10576/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10576/comments
https://api.github.com/repos/huggingface/transformers/issues/10576/events
https://github.com/huggingface/transformers/issues/10576
823,984,867
MDU6SXNzdWU4MjM5ODQ4Njc=
10,576
Movement pruning for DistilGPT2 - pre_trained model, issue while using dynamic_quantization
{ "login": "mriganktiwari", "id": 21966929, "node_id": "MDQ6VXNlcjIxOTY2OTI5", "avatar_url": "https://avatars.githubusercontent.com/u/21966929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mriganktiwari", "html_url": "https://github.com/mriganktiwari", "followers_url": "https://api.github.com/users/mriganktiwari/followers", "following_url": "https://api.github.com/users/mriganktiwari/following{/other_user}", "gists_url": "https://api.github.com/users/mriganktiwari/gists{/gist_id}", "starred_url": "https://api.github.com/users/mriganktiwari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mriganktiwari/subscriptions", "organizations_url": "https://api.github.com/users/mriganktiwari/orgs", "repos_url": "https://api.github.com/users/mriganktiwari/repos", "events_url": "https://api.github.com/users/mriganktiwari/events{/privacy}", "received_events_url": "https://api.github.com/users/mriganktiwari/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
## Environment info - `transformers` version: 4.3.3 - Platform: Ubuntu 20.04 - Python version: 3.8.8 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ### Who can help [@VictorSanh](https://github.com/VictorSanh) ## Information I am following through the Saving PruneBERT [notebook](https://github.com/huggingface/transformers/blob/b11386e158e86e62d4041eabd86d044cd1695737/examples/movement-pruning/Saving_PruneBERT.ipynb) from the *examples/movement-pruning/* directory, to have a pruned and quantized model for DistilGPT2. In cell 4: ``` # Elementary representation: we decompose the quantized tensors into (scale, zero_point, int_repr). # See https://pytorch.org/docs/stable/quantization.html # We further leverage the fact that int_repr is sparse matrix to optimize the storage: we decompose int_repr into # its CSR representation (data, indptr, indices). elementary_qtz_st = {} for name, param in qtz_st.items(): if "dtype" not in name and param.is_quantized: print("Decompose quantization for", name) # We need to extract the scale, the zero_point and the int_repr for the quantized tensor and modules scale = param.q_scale() # torch.tensor(1,) - float32 zero_point = param.q_zero_point() # torch.tensor(1,) - int32 elementary_qtz_st[f"{name}.scale"] = scale elementary_qtz_st[f"{name}.zero_point"] = zero_point # We assume the int_repr is sparse and compute its CSR representation # Only the FCs in the encoder are actually sparse int_repr = param.int_repr() # torch.tensor(nb_rows, nb_columns) - int8 int_repr_cs = sparse.csr_matrix(int_repr) # scipy.sparse.csr.csr_matrix elementary_qtz_st[f"{name}.int_repr.data"] = int_repr_cs.data # np.array int8 elementary_qtz_st[f"{name}.int_repr.indptr"] = int_repr_cs.indptr # np.array int32 assert max(int_repr_cs.indices) < 65535 # If not, we shall fall back to int32 elementary_qtz_st[f"{name}.int_repr.indices"] = np.uint16(int_repr_cs.indices) # np.array uint16 elementary_qtz_st[f"{name}.int_repr.shape"] = int_repr_cs.shape # tuple(int, int) else: elementary_qtz_st[name] = param ``` my model throws the below error: ```AttributeError: 'NoneType' object has no attribute 'is_quantized'``` which, on digging into the quantizing step in cell 2, shows a significant difference between the BERT and DistilGPT2 (the one I am using) quantized versions: 1. The BERT quantized first few layers look like: ``` bert.embeddings.position_ids bert.embeddings.word_embeddings.weight bert.embeddings.position_embeddings.weight bert.embeddings.token_type_embeddings.weight bert.embeddings.LayerNorm.weight bert.embeddings.LayerNorm.bias bert.encoder.layer.0.attention.self.query.scale bert.encoder.layer.0.attention.self.query.zero_point bert.encoder.layer.0.attention.self.query._packed_params.weight bert.encoder.layer.0.attention.self.query._packed_params.bias bert.encoder.layer.0.attention.self.key.scale bert.encoder.layer.0.attention.self.key.zero_point bert.encoder.layer.0.attention.self.key._packed_params.weight bert.encoder.layer.0.attention.self.key._packed_params.bias bert.encoder.layer.0.attention.self.value.scale bert.encoder.layer.0.attention.self.value.zero_point bert.encoder.layer.0.attention.self.value._packed_params.weight bert.encoder.layer.0.attention.self.value._packed_params.bias ``` 2. The quantized DistilGPT2 first few layers look like: ``` transformer.wte.weight transformer.wpe.weight transformer.h.0.ln_1.weight transformer.h.0.ln_1.bias transformer.h.0.attn.bias transformer.h.0.attn.masked_bias transformer.h.0.attn.c_attn.weight transformer.h.0.attn.c_attn.bias transformer.h.0.attn.c_proj.weight transformer.h.0.attn.c_proj.bias transformer.h.0.ln_2.weight transformer.h.0.ln_2.bias ``` 3. As you would notice, there is a clear difference in the way layers are formed after quantization a) BERT has ```.scale``` and ```.zero_point``` added to every layer after embeddings, whereas the DistilGPT2 layers do not get these 2 extras. b) any ```.weight``` and ```.bias``` are converted to ```._packed_params.weight``` and ```._packed_params.bias``` respectively. 4. I believe this is why when processing the cell 4: a) It does not even go to all layers that are missing ```._packed_params``` and just tries to process the last layer which is ``` lm_head.scale lm_head.zero_point lm_head._packed_params.weight lm_head._packed_params.bias ``` b) Where it fails with the error mentioned just before point 1. ## To reproduce Steps to reproduce the behavior: 1. Clone `transformers` and follow the steps to install the `movement-pruning` example 2. Upgrade torch to v1.4 3. Try to run the `Saving_PruneBERT.ipynb` notebook with 1 change, in the cell 2 instantiate the model class with the line below ```model = AutoModelForCausalLM.from_pretrained('distilgpt2')```
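One plausible explanation for the difference described above, shown as a hedged sketch: `torch.quantization.quantize_dynamic` only rewrites the module types it is given, and GPT-2 blocks use transformers' `Conv1D` rather than `torch.nn.Linear`, so with the usual `{torch.nn.Linear}` spec only the `lm_head` gains `scale`/`zero_point`/`_packed_params` entries:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("distilgpt2")
# only nn.Linear modules are rewritten; the Conv1D layers inside the GPT-2 blocks
# (attn.c_attn, attn.c_proj, mlp.c_fc, mlp.c_proj) stay as ordinary float tensors
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
print(type(quantized.lm_head))                         # dynamically quantized Linear
print(type(quantized.transformer.h[0].attn.c_attn))    # still transformers' Conv1D
```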
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10576/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10575
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10575/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10575/comments
https://api.github.com/repos/huggingface/transformers/issues/10575/events
https://github.com/huggingface/transformers/issues/10575
823,979,670
MDU6SXNzdWU4MjM5Nzk2NzA=
10,575
bug in run_finetune
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "sorry my mistake to use last version of examples ", "hi how you solved this. i also face this error. i can't find \"is_offline_mode\" and \"get_full_repo_name\" under transformers.utils. i use https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization", "This should work if you are using `transformers` master/main ", "> https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization\r\n\r\nI miss the same issue. How do you solve it?" ]
1,615
1,700
1,615
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten, @patil-suraj ## Information I am running run_seq2seq.py getting ImportError: cannot import name 'is_offline_mode' from 'transformers.file_utils' ## To reproduce Steps to reproduce the behavior: python run_seq2seq.py thnaks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10575/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10575/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10574
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10574/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10574/comments
https://api.github.com/repos/huggingface/transformers/issues/10574/events
https://github.com/huggingface/transformers/issues/10574
823,933,972
MDU6SXNzdWU4MjM5MzM5NzI=
10,574
The dimension of Feature extraction
{ "login": "LemonQC", "id": 30914380, "node_id": "MDQ6VXNlcjMwOTE0Mzgw", "avatar_url": "https://avatars.githubusercontent.com/u/30914380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LemonQC", "html_url": "https://github.com/LemonQC", "followers_url": "https://api.github.com/users/LemonQC/followers", "following_url": "https://api.github.com/users/LemonQC/following{/other_user}", "gists_url": "https://api.github.com/users/LemonQC/gists{/gist_id}", "starred_url": "https://api.github.com/users/LemonQC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LemonQC/subscriptions", "organizations_url": "https://api.github.com/users/LemonQC/orgs", "repos_url": "https://api.github.com/users/LemonQC/repos", "events_url": "https://api.github.com/users/LemonQC/events{/privacy}", "received_events_url": "https://api.github.com/users/LemonQC/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "How did you create the `nlp_features` function?\r\n\r\nThe sequence length is different due to different tokenization. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
![image](https://user-images.githubusercontent.com/30914380/110243654-4dadf480-7f96-11eb-8a2a-1bff9407c786.png) ![image](https://user-images.githubusercontent.com/30914380/110243672-5999b680-7f96-11eb-91e4-2728099b2418.png) Why did this happen?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10574/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10573
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10573/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10573/comments
https://api.github.com/repos/huggingface/transformers/issues/10573/events
https://github.com/huggingface/transformers/pull/10573
823,862,909
MDExOlB1bGxSZXF1ZXN0NTg2MjEwOTE4
10,573
Update data_collator.py
{ "login": "good74152", "id": 39672039, "node_id": "MDQ6VXNlcjM5NjcyMDM5", "avatar_url": "https://avatars.githubusercontent.com/u/39672039?v=4", "gravatar_id": "", "url": "https://api.github.com/users/good74152", "html_url": "https://github.com/good74152", "followers_url": "https://api.github.com/users/good74152/followers", "following_url": "https://api.github.com/users/good74152/following{/other_user}", "gists_url": "https://api.github.com/users/good74152/gists{/gist_id}", "starred_url": "https://api.github.com/users/good74152/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/good74152/subscriptions", "organizations_url": "https://api.github.com/users/good74152/orgs", "repos_url": "https://api.github.com/users/good74152/repos", "events_url": "https://api.github.com/users/good74152/events{/privacy}", "received_events_url": "https://api.github.com/users/good74152/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
NONE
null
Add Chinese comments.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10573/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10573", "html_url": "https://github.com/huggingface/transformers/pull/10573", "diff_url": "https://github.com/huggingface/transformers/pull/10573.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10573.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10572
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10572/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10572/comments
https://api.github.com/repos/huggingface/transformers/issues/10572/events
https://github.com/huggingface/transformers/issues/10572
823,851,373
MDU6SXNzdWU4MjM4NTEzNzM=
10,572
Import error for class Speech2TextProcessor, Speech2TextTransformerForConditionalGeneration
{ "login": "amiyamandal-dev", "id": 42173775, "node_id": "MDQ6VXNlcjQyMTczNzc1", "avatar_url": "https://avatars.githubusercontent.com/u/42173775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amiyamandal-dev", "html_url": "https://github.com/amiyamandal-dev", "followers_url": "https://api.github.com/users/amiyamandal-dev/followers", "following_url": "https://api.github.com/users/amiyamandal-dev/following{/other_user}", "gists_url": "https://api.github.com/users/amiyamandal-dev/gists{/gist_id}", "starred_url": "https://api.github.com/users/amiyamandal-dev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amiyamandal-dev/subscriptions", "organizations_url": "https://api.github.com/users/amiyamandal-dev/orgs", "repos_url": "https://api.github.com/users/amiyamandal-dev/repos", "events_url": "https://api.github.com/users/amiyamandal-dev/events{/privacy}", "received_events_url": "https://api.github.com/users/amiyamandal-dev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi @amiyamandal-dev \r\nThank you for your interest in `S2T`. It's still a work in progress and not available on master yet. If you want to try it, you could checkout this PR #10175", "Hey @amiyamandal-dev ,\r\n\r\nThe model is now available on [master](https://huggingface.co/transformers/master/model_doc/speech_to_text.html)! You could install transformers from the source if you want to try it." ]
1,615
1,615
1,615
NONE
null
## Environment info - `transformers` version: 4.4.0.dev0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.8 - PyTorch version (GPU?): 1.7.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help Models: - Speech2TextProcessor, Speech2TextTransformerForConditionalGeneration @patil-suraj ## Information Model I am using (Speech2TextProcessor, Speech2TextTransformerForConditionalGeneration): The problem arises when using: * trying to import the model ## To reproduce Steps to reproduce the behavior: 1. First install the master branch 2. Then try to import the Speech2TextProcessor and Speech2TextTransformerForConditionalGeneration classes; the import error below is raised ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-1-8149e8a8d76d> in <module> 1 import torch ----> 2 from transformers import Speech2TextProcessor, Speech2TextTransformerForConditionalGeneration 3 from datasets import load_dataset 4 import soundfile as sf ImportError: cannot import name 'Speech2TextProcessor' from 'transformers' (unknown location) ``` ## Expected behavior I should be able to run the code from _https://huggingface.co/facebook/s2t-large-librispeech-asr_ in the docs. One suggestion: a fine-tuning script for the s2t-large-librispeech-asr model would be a great help. Thank you
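At the time of this issue the speech-to-text classes only existed on an unmerged branch, so a plain import fails on a regular install. A hedged probe — the class names below are taken from the issue text and from later documentation and may not all exist in any particular version — avoids the hard `ImportError` while showing what the current install actually exposes:

```python
# Look the classes up dynamically instead of importing them directly,
# since their availability (and exact names) depends on the installed version.
import transformers

print("transformers version:", transformers.__version__)

candidates = [
    "Speech2TextProcessor",
    "Speech2TextTransformerForConditionalGeneration",  # name used in the issue
    "Speech2TextForConditionalGeneration",             # name used in later docs
]

available = [name for name in candidates if hasattr(transformers, name)]
missing = [name for name in candidates if name not in available]

print("available:", available or "none")
print("missing:", missing or "none")
if not available:
    print("None of these classes are exposed by this install; "
          "a source install of the development branch is likely required.")
```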
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10572/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10571
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10571/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10571/comments
https://api.github.com/repos/huggingface/transformers/issues/10571/events
https://github.com/huggingface/transformers/issues/10571
823,820,617
MDU6SXNzdWU4MjM4MjA2MTc=
10,571
Advice on creating/wrapping `PreTrainedModel` to be compatible with the codebase?
{ "login": "HanGuo97", "id": 18187806, "node_id": "MDQ6VXNlcjE4MTg3ODA2", "avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HanGuo97", "html_url": "https://github.com/HanGuo97", "followers_url": "https://api.github.com/users/HanGuo97/followers", "following_url": "https://api.github.com/users/HanGuo97/following{/other_user}", "gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}", "starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions", "organizations_url": "https://api.github.com/users/HanGuo97/orgs", "repos_url": "https://api.github.com/users/HanGuo97/repos", "events_url": "https://api.github.com/users/HanGuo97/events{/privacy}", "received_events_url": "https://api.github.com/users/HanGuo97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @HanGuo97,\r\n\r\nWe try to keep the GitHub issues for bug reports. Do you mind asking your question on the forum instead? Also there might already be similar questions on the forum, such as https://discuss.huggingface.co/t/create-a-custom-model-that-works-with-any-pretrained-transformer-body/4186. Thanks!", "Got it, thanks for letting me know!" ]
1,615
1,615
1,615
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: NA - Platform: NA - Python version: NA - PyTorch version (GPU?): NA - Tensorflow version (GPU?): NA - Using GPU in script?: NA - Using distributed or parallel set-up in script?: NA ### Who can help @patrickvonplaten <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Thanks for the amazing library! I'm curious if there are instructions on creating a `PreTrainedModel` subclass or creating an `nn.Module` that behaves like a `PreTrainedModel`? Suppose I want to wrap the existing model with some simple additional capabilities inside an `nn.Module`, what are some of the methods that I need to implement/override -- so that they can work well with existing examples? I'm aware of some tutorials on creating a new model, but that seems pretty complicated and involved -- whereas I'm interested in just adding a couple of simple features. For example, in the Seq2Seq example, I have noticed that the function signature of `model.forward` determines what data will (not) be passed to the model (as in [`trainer._remove_unused_columns`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L458)), and the existence of `model.prepare_decoder_input_ids_from_labels` also influences the input data (as in [`DataCollatorForSeq2Seq .__call__`](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L292)). It'd be great if someone could point me to some guidance on tweaking the model to be compatible with the rest of the codebase. Thanks in advance for your time! Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
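One way to read the question: what does a wrapper need so the maintained example scripts keep working? A rough sketch, not an official pattern — the class name, the `t5-small` checkpoint and the extra linear layer are placeholders — is to mirror the wrapped model's forward argument names (signature-based column pruning keeps only those) and to delegate the optional `prepare_decoder_input_ids_from_labels` hook:

```python
# Sketch of a wrapper module meant to stay compatible with Trainer and
# DataCollatorForSeq2Seq; everything specific here is illustrative.
from torch import nn
from transformers import AutoModelForSeq2SeqLM


class WrappedSeq2Seq(nn.Module):
    def __init__(self, model_name="t5-small"):
        super().__init__()
        self.model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
        self.config = self.model.config  # some Trainer code paths read model.config
        self.extra_head = nn.Linear(self.config.d_model, self.config.d_model)  # placeholder addition

    def forward(self, input_ids=None, attention_mask=None, labels=None,
                decoder_input_ids=None, **kwargs):
        # The argument names must match the dataset columns you want to keep;
        # whatever is returned must expose a loss for the Trainer.
        return self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            labels=labels,
            decoder_input_ids=decoder_input_ids,
            **kwargs,
        )

    def prepare_decoder_input_ids_from_labels(self, labels):
        # Delegate the hook; only recent versions of the base models define it.
        return self.model.prepare_decoder_input_ids_from_labels(labels=labels)
```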
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10571/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10571/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10570
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10570/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10570/comments
https://api.github.com/repos/huggingface/transformers/issues/10570/events
https://github.com/huggingface/transformers/pull/10570
823,812,800
MDExOlB1bGxSZXF1ZXN0NTg2MTc1ODY1
10,570
fix tf doc bug
{ "login": "Sniper970119", "id": 30463691, "node_id": "MDQ6VXNlcjMwNDYzNjkx", "avatar_url": "https://avatars.githubusercontent.com/u/30463691?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sniper970119", "html_url": "https://github.com/Sniper970119", "followers_url": "https://api.github.com/users/Sniper970119/followers", "following_url": "https://api.github.com/users/Sniper970119/following{/other_user}", "gists_url": "https://api.github.com/users/Sniper970119/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sniper970119/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sniper970119/subscriptions", "organizations_url": "https://api.github.com/users/Sniper970119/orgs", "repos_url": "https://api.github.com/users/Sniper970119/repos", "events_url": "https://api.github.com/users/Sniper970119/events{/privacy}", "received_events_url": "https://api.github.com/users/Sniper970119/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
I found there is a difference between TFBertForPreTraining and BertForPreTraining. I have created a forum post at `https://discuss.huggingface.co/t/different-doc-with-bertforpretraining-and-tfbertforpretraining/4167` and got a response.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10570/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10570", "html_url": "https://github.com/huggingface/transformers/pull/10570", "diff_url": "https://github.com/huggingface/transformers/pull/10570.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10570.patch", "merged_at": 1615174310000 }
https://api.github.com/repos/huggingface/transformers/issues/10569
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10569/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10569/comments
https://api.github.com/repos/huggingface/transformers/issues/10569/events
https://github.com/huggingface/transformers/pull/10569
823,785,395
MDExOlB1bGxSZXF1ZXN0NTg2MTYwMDgw
10,569
offline mode for firewalled envs (part 2)
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
In https://github.com/huggingface/transformers/pull/10407 I noticed I missed a few places where `local_files_only` should be overridden for the offline mode, so this PR completes the process. Also rewrote the test to be more readable. Could test TF/Flax too but I don't have tiny models to run quick tests on. @LysandreJik, @sgugger
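For context, the two user-facing switches that this offline mode builds on can be exercised as below; the sketch assumes the checkpoint was already downloaded into the local cache, otherwise both paths fail by design:

```python
import os

# Option 1: flip the global switch before transformers is imported.
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModel, AutoTokenizer

# Option 2: pass the flag explicitly on each call.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", local_files_only=True)
model = AutoModel.from_pretrained("bert-base-uncased", local_files_only=True)
```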
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10569/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10569", "html_url": "https://github.com/huggingface/transformers/pull/10569", "diff_url": "https://github.com/huggingface/transformers/pull/10569.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10569.patch", "merged_at": 1615222340000 }
https://api.github.com/repos/huggingface/transformers/issues/10568
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10568/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10568/comments
https://api.github.com/repos/huggingface/transformers/issues/10568/events
https://github.com/huggingface/transformers/pull/10568
823,756,843
MDExOlB1bGxSZXF1ZXN0NTg2MTQyNDM4
10,568
Ner label re alignment
{ "login": "elk-cloner", "id": 5828101, "node_id": "MDQ6VXNlcjU4MjgxMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/5828101?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elk-cloner", "html_url": "https://github.com/elk-cloner", "followers_url": "https://api.github.com/users/elk-cloner/followers", "following_url": "https://api.github.com/users/elk-cloner/following{/other_user}", "gists_url": "https://api.github.com/users/elk-cloner/gists{/gist_id}", "starred_url": "https://api.github.com/users/elk-cloner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elk-cloner/subscriptions", "organizations_url": "https://api.github.com/users/elk-cloner/orgs", "repos_url": "https://api.github.com/users/elk-cloner/repos", "events_url": "https://api.github.com/users/elk-cloner/events{/privacy}", "received_events_url": "https://api.github.com/users/elk-cloner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you for addressing this! I left some minor comments/questions.", "@LysandreJik I think this is ready for review now.", "Hey @elk-cloner, @francescorubbo! That's an amazing work you've done here. The added tests are a wonderful addition, and will ensure the pipeline is as robust as it can be.\r\n\r\nTo make reviews easier, could you please fill in the PR description or add a comment mentioning the changes? For example:\r\n- What capabilities have been added\r\n- What are the expected changes from the current behavior\r\n\r\nAnd optionally, if you have the time to:\r\n- Example use cases with code sample enabled by the PR\r\n- Previous use cases with code sample that see the behavior changes\r\n\r\nIf you don't have time to do any of that, that's perfectly fine - just let me know and I'll take care of it as soon as I have a bit of availability. \r\n\r\nThanks again for the great work you've done here!", "This looks good. I'm wondering if you can add some tests to verify the expected behaviour of two other scenarios from the bug report.\r\n\r\nSpecifically, the tests in the PR seem to ensure:\r\nAccenture โ†’ A ##cc ##ent ##ure โ†’ B-ORG O O O โ†’ Accenture (ORG)\r\n\r\n...but does not make assertions for mixed B/I/O labels in the same word:\r\nMax Mustermann โ†’ Max Must ##erman ##n โ†’ B-PER I-PER I-PER O โ†’ Max Mustermann (PER)\r\n\r\n...or inner entity labels surrounded by O labels:\r\nElasticsearch โ†’ El ##astic ##sea #rch โ†’ O O I-MISC O โ†’ Elasticsearch (MISC)\r\n", "@joshdevins Thank you for suggesting to test those additional scenarios. Testing for those helped me identify some bugs in the previous implementation. I believe the new test should cover all three scenarios now.", "@LysandreJik I'll add the requested notes here, as I don't seem to have permissions to edit the PR description. 
Maybe @elk-cloner can transfer some of the info there.\r\n\r\n> What capabilities have been added\r\n\r\n## label realignment\r\n\r\nToken predictions for subwords can be realigned with 4 different strategies\r\n\r\n- default: reset all subword token predictions except for first token\r\n- first: the prediction for the first token in the word is assigned to all subword tokens\r\n- max: the highest confidence prediction among the subword tokens is assigned to all subword tokens\r\n- average: the average pool of the predictions for all subwords is assigned to all subword tokens\r\n- ignore subwords: enable ignoring subwords by merging tokens\r\n\r\n> What are the expected changes from the current behavior\r\n\r\n## New flag `subword_label_re_alignment` enables realignment.\r\n\r\nAlready existing flag `ignore_subwords` actually enables merging subwords.\r\n\r\n> Example use cases with code sample enabled by the PR\r\n\r\n```\r\nner = transformers.pipeline('ner',\r\n model='elastic/distilbert-base-cased-finetuned-conll03-english',\r\n tokenizer='elastic/distilbert-base-cased-finetuned-conll03-english',\r\n ignore_labels = [],\r\n ignore_subwords=False,\r\n subword_label_re_alignment='average'\r\n )\r\nner('Mark Musterman')\r\n[{'word': 'Mark',\r\n 'score': 0.999686598777771,\r\n 'index': 1,\r\n 'start': 0,\r\n 'end': 4,\r\n 'is_subword': False,\r\n 'entity': 'B-PER'},\r\n {'word': 'Must',\r\n 'score': 0.9995412826538086,\r\n 'index': 2,\r\n 'start': 5,\r\n 'end': 9,\r\n 'is_subword': False,\r\n 'entity': 'I-PER'},\r\n {'word': '##erman',\r\n 'score': 0.9996127486228943,\r\n 'index': 3,\r\n 'start': 9,\r\n 'end': 14,\r\n 'is_subword': True,\r\n 'entity': 'I-PER'}]\r\nner = transformers.pipeline('ner',\r\n model='elastic/distilbert-base-cased-finetuned-conll03-english',\r\n tokenizer='elastic/distilbert-base-cased-finetuned-conll03-english',\r\n ignore_labels = [],\r\n ignore_subwords=True,\r\n subword_label_re_alignment='average'\r\n )\r\nner('Mark Musterman')\r\n[{'word': 'Mark',\r\n 'score': 0.999686598777771,\r\n 'index': 1,\r\n 'start': 0,\r\n 'end': 4,\r\n 'is_subword': False,\r\n 'entity': 'B-PER'},\r\n {'word': 'Musterman',\r\n 'score': 0.9995412826538086,\r\n 'index': 2,\r\n 'start': 5,\r\n 'end': 9,\r\n 'is_subword': False,\r\n 'entity': 'I-PER'}]\r\n```\r\n> Previous use cases with code sample that see the behavior changes\r\n\r\n```\r\nner = transformers.pipeline('ner',\r\n model='elastic/distilbert-base-cased-finetuned-conll03-english',\r\n tokenizer='elastic/distilbert-base-cased-finetuned-conll03-english',\r\n ignore_labels = [],\r\n ignore_subwords=True\r\n )\r\nner('Mark Musterman')\r\n[{'word': 'Mark',\r\n 'score': 0.999686598777771,\r\n 'entity': 'B-PER',\r\n 'index': 1,\r\n 'start': 0,\r\n 'end': 4},\r\n {'word': 'Must',\r\n 'score': 0.9995412826538086,\r\n 'entity': 'I-PER',\r\n 'index': 2,\r\n 'start': 5,\r\n 'end': 9},\r\n {'word': '##erman',\r\n 'score': 0.9996127486228943,\r\n 'entity': 'I-PER',\r\n 'index': 3,\r\n 'start': 9,\r\n 'end': 14}]\r\n```", "Thank you, @francescorubbo, I added them to PR.", "I haven't looked at the code changes yet, but looking at the proposed functionality changes. \r\nReferring to [this comment](https://github.com/huggingface/transformers/issues/10263#issuecomment-782187601):\r\n\r\n> As a general principle, I would argue that if `grouped_entities=True`, we should never be returning sub-words alone. \r\n> Either they're part of a word that has a label, or they're not. 
I honestly still don't understand what the flag `ignore_subwords` is supposed to control ๐Ÿคท\r\n\r\nIt used to be that `grouped_entities=True` wouldn't treat subwords differently, `ignore_subwords` was added as a way to provide the current default behaviour, while still allowing `ignore_subwords=False` to be set for backwards compatibility. \r\nIndeed I had [similar thoughts](https://github.com/huggingface/transformers/pull/5970#issuecomment-693374424) about how the subwords should be treated, & if there was need for a custom strategy (average,max etc)\r\n\r\nI like the below proposal as it can be seen as an expansion of the current logic:\r\n\r\n> I would propose two flags:\r\n> \r\n> * `grouped_entities` (boolean) -- note that this implies subword grouping/label realignment (see below)\r\n> \r\n> * `True` will group all words into larger entities, e.g. Max Mustermann -> B-PER I-PER -> \"Max Musterman\" (PER)\r\n> * `False` will leave words separated, , e.g. Max Mustermann -> B-PER I-PER -> \"Max Musterman\" (PER)\r\n> * `subword_label_realignment` (boolean or strategy name)\r\n> \r\n> * `True` will use the default for the way the NER fine-tuning was performed, see default suggestions above\r\n> * `False` will leave sub-words alone -- note that this implies that `grouped_entities` should be ignores\r\n> * strategy name -- based on the above strategies\r\n\r\nโ— Except that subword_label_realignment=False shouldn't ignore `grouped_entities`. `grouped_entities` flag refers to B-I grouping not subword grouping. We shouldn't enforce subword grouping with `grouped_entities` flag ! We don't know what user cases there might be that use that combination.\r\n\r\n๐Ÿ‘‰ So my proposed generalized version would be like this:\r\n \r\n`grouped_entities` (Current behaviour is left as is):\r\n * `True` will group all words into larger entities, e.g. Max Mustermann -> B-PER I-PER -> \"Max Musterman\" (PER)\r\n * `False` will leave words separated, , e.g. 
Max Mustermann -> B-PER I-PER -> \"Max Musterman\" (B-PER I-PER)\"\r\n\r\n `subword_label_realignment` (strategy name) (Replaces `ignore_subwords`)\r\n\r\n* none: Don't treat subwords differently (equal to old ignore_subwords=False)\r\n* first: the prediction for the first token in the word is assigned to the word (equal to old ignore_subwords=True, the current default behaviour)\r\n* max: the highest confidence prediction among the wordpiece tokens is assigned to the word (New feature)\r\n* average: the average pool of the predictions among the wordpiece tokens is assigned to the word (New feature)\r\n\r\nHere `subword_label_realignment` becomes actually an expansion of the `ignore_subwords` flag.\r\n\r\nAlso I don't understand what the below mode is supposed to mean @francescorubbo @elk-cloner \r\n> default: reset all subword token predictions except for first token", "Thank you for the feedback, @LysandreJik @sgugger @cceyda !\r\nI've refactored things as follows:\r\n- the new argument is named `aggregation_strategy` and only determines how score and label of the word are computed if the `ignore_subwords` argument is `True`\r\n- the possible strategies are mapped to the `AggregationStrategy` enum\r\n- expected results for the tests are moved into json fixtures\r\n\r\nNote that I didn't push the refactor as far as @cceyda suggested because I wanted to preserve backward-compatibility, as also requested by @LysandreJik .\r\n\r\nFor some reason merging the latest master is causing the code quality check to fail on files unrelated to this PR...any thought on that?", "There was a new release of the black library which touched a lot of files, so you will need to rebase your PR on master to have the quality tests pass again.", "> There was a new release of the black library which touched a lot of files, so you will need to rebase your PR on master to have the quality tests pass again.\r\n\r\nI did merge master (see 031f3ef39db9b7164bad783ca17086cdcf000389). Shouldn't it address that?", "I think originally there was also mention of saving the `aggregation_strategy` to the model config?\r\nsince it makes the most sense to use the same strategy the model was trained on, ignoring subwords or else.", "> I think originally there was also mention of saving the aggregation_strategy to the model config?\r\n> since it makes the most sense to use the same strategy the model was trained on, ignoring subwords or else.\r\n\r\n@cceyda Yes, this was my original proposal, but I think it might be too much for one PR. I would not close the original issue (https://github.com/huggingface/transformers/issues/10263) until the other items are addressed, but perhaps a new/smaller PR can address saving the strategy used at training/evaluation time to the model config file.", "ugh...this ^ is why I hate rebasing on big project repos...\r\n@sgugger from a cursory look the 215 (!) file diffs look legit, please let me know if this PR needs any more work before you can merge.", "@LysandreJik @sgugger Is there more work needed for this PR? If the rebase is an issue, I can create a new PR with only the relevant changes, but we would loose the commit history.", "We can't see the diff of the PR anymore after the rebase, so you should close this one and open a new one from the same branch please. (GitHub completely sucks at properly showing rebases, unless you force push after the rebase.)" ]
1,615
1,620
1,620
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10263 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [link](https://github.com/huggingface/transformers/issues/10263#issuecomment-781648059) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - pipelines: @LysandreJik, @Narsil, @joshdevins ## What capabilities have been added ? label realignment: token predictions for subwords can be realigned with 4 different strategies - default: reset all subword token predictions except for first token - first: the prediction for the first token in the word is assigned to all subword tokens - max: the highest confidence prediction among the subword tokens is assigned to all subword tokens - average: the average pool of the predictions for all subwords is assigned to all subword tokens - ignore subwords: enable ignoring subwords by merging tokens ## What are the expected changes from the current behavior? - New flag subword_label_re_alignment enables realignment. - Already existing flag ignore_subwords actually enables merging subwords. 
## Example use cases with code sample enabled by the PR ``` ner = transformers.pipeline( 'ner', model='elastic/distilbert-base-cased-finetuned-conll03-english', tokenizer='elastic/distilbert-base-cased-finetuned-conll03-english', ignore_labels=[], ignore_subwords=False, subword_label_re_alignment='average' ) ner('Mark Musterman') [ { 'word': 'Mark', 'score': 0.999686598777771, 'index': 1, 'start': 0, 'end': 4, 'is_subword': False, 'entity': 'B-PER' }, { 'word': 'Must', 'score': 0.9995412826538086, 'index': 2, 'start': 5, 'end': 9, 'is_subword': False, 'entity': 'I-PER' }, { 'word': '##erman', 'score': 0.9996127486228943, 'index': 3, 'start': 9, 'end': 14, 'is_subword': True, 'entity': 'I-PER' } ] ner = transformers.pipeline( 'ner', model='elastic/distilbert-base-cased-finetuned-conll03-english', tokenizer='elastic/distilbert-base-cased-finetuned-conll03-english', ignore_labels=[], ignore_subwords=True, subword_label_re_alignment='average' ) ner('Mark Musterman') [ { 'word': 'Mark', 'score': 0.999686598777771, 'index': 1, 'start': 0, 'end': 4, 'is_subword': False, 'entity': 'B-PER' }, { 'word': 'Musterman', 'score': 0.9995412826538086, 'index': 2, 'start': 5, 'end': 9, 'is_subword': False, 'entity': 'I-PER' } ] ``` ## Previous use cases with code sample that see the behavior changes ``` ner = transformers.pipeline( 'ner', model='elastic/distilbert-base-cased-finetuned-conll03-english', tokenizer='elastic/distilbert-base-cased-finetuned-conll03-english', ignore_labels=[], ignore_subwords=True ) ner('Mark Musterman') [ { 'word': 'Mark', 'score': 0.999686598777771, 'entity': 'B-PER', 'index': 1, 'start': 0, 'end': 4 }, { 'word': 'Must', 'score': 0.9995412826538086, 'entity': 'I-PER', 'index': 2, 'start': 5, 'end': 9 }, { 'word': '##erman', 'score': 0.9996127486228943, 'entity': 'I-PER', 'index': 3, 'start': 9, 'end': 14 } ] ```
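The non-default strategies can also be illustrated outside the pipeline with toy numbers; the probabilities below are made up and only meant to show how `first`, `max` and `average` can disagree on the same word:

```python
# Toy illustration of the aggregation strategies; this mirrors the idea only,
# it is not the pipeline implementation.
import numpy as np

labels = ["O", "B-PER", "I-PER"]
# rows: subword tokens of one word, columns: class probabilities
subword_probs = np.array([
    [0.10, 0.80, 0.10],  # "Must"
    [0.20, 0.10, 0.70],  # "##erman"
    [0.60, 0.10, 0.30],  # "##n"
])

first = subword_probs[0]                                        # first subword decides
best_row = subword_probs[np.argmax(subword_probs.max(axis=1))]  # most confident subword decides
average = subword_probs.mean(axis=0)                            # mean-pool, then argmax

for name, probs in [("first", first), ("max", best_row), ("average", average)]:
    print(f"{name:8s} -> {labels[int(np.argmax(probs))]} ({probs.max():.2f})")
```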
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10568/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10568", "html_url": "https://github.com/huggingface/transformers/pull/10568", "diff_url": "https://github.com/huggingface/transformers/pull/10568.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10568.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10567
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10567/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10567/comments
https://api.github.com/repos/huggingface/transformers/issues/10567/events
https://github.com/huggingface/transformers/issues/10567
823,748,628
MDU6SXNzdWU4MjM3NDg2Mjg=
10,567
XLSR-53
{ "login": "yagan93", "id": 51398865, "node_id": "MDQ6VXNlcjUxMzk4ODY1", "avatar_url": "https://avatars.githubusercontent.com/u/51398865?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yagan93", "html_url": "https://github.com/yagan93", "followers_url": "https://api.github.com/users/yagan93/followers", "following_url": "https://api.github.com/users/yagan93/following{/other_user}", "gists_url": "https://api.github.com/users/yagan93/gists{/gist_id}", "starred_url": "https://api.github.com/users/yagan93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yagan93/subscriptions", "organizations_url": "https://api.github.com/users/yagan93/orgs", "repos_url": "https://api.github.com/users/yagan93/repos", "events_url": "https://api.github.com/users/yagan93/events{/privacy}", "received_events_url": "https://api.github.com/users/yagan93/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Apparently, someone [just did it](https://huggingface.co/facebook/wav2vec2-large-xlsr). But there are some files missing and it currently unusable. Hopefully the author will soon update it :)", "Pinging @patrickvonplaten for knowledge :)", "Yeah, I just added the pretrained checkpoint. I'll release a notebook by the middle/end of next week on how to fine-tune the checkpoint. Please ping me here again if you can't find it :-)", "@patrickvonplaten Thanks a lot! Cant wait to use it and see how it performs :) ", "@patrickvonplaten cool! is it possible to use with Transformers XLSR-53 finetuned with Fairseq?", "will release a notebook either tomorrow or on Monday about it :-)", "@patrickvonplaten can't wait :)", "Notebook is available here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 :-) ", "We are organizing a \"fine-tuning XLSR-53\" event. Check this announcement: https://discuss.huggingface.co/t/open-to-the-community-xlsr-wav2vec2-fine-tuning-week-for-low-resource-languages/4467. Would be awesome if you want to participate :-)", "@patrickvonplaten \r\n\r\nHey buddy! \r\n\r\nFirst and foremost I want to thank you again for all your effort! Really appreciate it! \r\n\r\nGot another litte question:\r\n\r\nFine tuned a wav2vec-large-xlsr-53 model on Swiss German (bernese dialect) as laid out in one of your blogs.\r\n\r\nCurrently trying to add an already existing 6-Gram-KenLM on top.\r\n\r\nCould you give me some hints on how to do it? Or is it yet not even possible?\r\n\r\nKind regards\r\nYves :wink:\r\n\r\n", "Hey Yves, \r\n\r\nHere a forum post regarding this issue: https://discuss.huggingface.co/t/language-model-for-wav2vec2-0-decoding/4434", "Hi all,\r\nI am following up on this issue: I am trying to use the pre-trained Wav2Vec2-XLSR-53 (https://huggingface.co/facebook/wav2vec2-large-xlsr-53) and according to the documentation, it should be available as:\r\n\r\n```\r\nfrom transformers import AutoTokenizer, AutoModel\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/wav2vec2-large-xlsr-53\")\r\nmodel = AutoModel.from_pretrained(\"facebook/wav2vec2-large-xlsr-53\")\r\n```\r\nThe model is available, but the tokenizer is not found (error: OSError: Can't load tokenizer for 'facebook/wav2vec2-large-xlsr-53'. Make sure that: (...) ). I tried using Transformers 4.2.2 and 4.5.0 as well as cloning the repository -- no luck. I am able to successfully load e.g. the French version:\r\n\r\n`\ttokenizer = AutoTokenizer.from_pretrained(\"facebook/wav2vec2-large-xlsr-53-french\")\r\n`\r\n\r\nBut not the base XLSR tokenizer? \r\n\r\nThanks so much for the brilliant work!", "Hey @gretatuckute \r\n\r\nCheck out my HuggingFace Profile https://huggingface.co/Yves. There you'll find what you're after. \r\nIf you ask @patrickvonplaten he could also invite you the wav2vec xlsr slack channel :) \r\n\r\nCheers\r\nYves", "Hi @yagan93, thank you for getting back! On your HF profile I only see the Swiss-German tokenizer? ", "@gretatuckute You just got to swop the models and make little adjustments. Check out this notebook for details information on how to do so. ", "Closed by https://github.com/huggingface/transformers/pull/10648" ]
1,615
1,631
1,631
NONE
null
# ๐Ÿš€ Feature request Is it possible to use XLSR-53 with transformers in the near future?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10567/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/10567/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10566
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10566/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10566/comments
https://api.github.com/repos/huggingface/transformers/issues/10566/events
https://github.com/huggingface/transformers/issues/10566
823,731,881
MDU6SXNzdWU4MjM3MzE4ODE=
10,566
from_pretrained() - some model weights not initialized message
{ "login": "jsrozner", "id": 1113285, "node_id": "MDQ6VXNlcjExMTMyODU=", "avatar_url": "https://avatars.githubusercontent.com/u/1113285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jsrozner", "html_url": "https://github.com/jsrozner", "followers_url": "https://api.github.com/users/jsrozner/followers", "following_url": "https://api.github.com/users/jsrozner/following{/other_user}", "gists_url": "https://api.github.com/users/jsrozner/gists{/gist_id}", "starred_url": "https://api.github.com/users/jsrozner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jsrozner/subscriptions", "organizations_url": "https://api.github.com/users/jsrozner/orgs", "repos_url": "https://api.github.com/users/jsrozner/repos", "events_url": "https://api.github.com/users/jsrozner/events{/privacy}", "received_events_url": "https://api.github.com/users/jsrozner/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Duplicate of https://github.com/huggingface/transformers/issues/8933 . This wrong waring should have been fixed in newer versions, see: https://github.com/huggingface/transformers/blob/63c295ac05962b03701bdda87a90595b5f864075/src/transformers/models/t5/modeling_t5.py#L1188", "Great! So *all* weights are in fact loaded from pretrained, or when loading from a checkpoint, then, right?" ]
1,615
1,615
1,615
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.0.1 - Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: no Note also: cookiecutter dependency is not included in pip install transformers so transformers-cli env initially fails ### Who can help (T5) @patrickvonplaten, @patil-suraj, @sshleifer When using `T5ForConditionalGeneration.from_pretrained('t5-base')`, I get the following warning at load: ``` Some weights of the model checkpoint at t5-large were not used when initializing T5ForConditionalGeneration: ['decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight'] - This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` If I load from a checkpoint that I create (i.e. local file), I get the same message. But I think that all weights are, in fact, identical: - evaluation code on the model I finetune before saving AND - evaluation code on the model I finetune, save, and then reload are identical. This suggests that *all* weights are identical, since performance is identical. This contradicts the warning message. Questions: 1) Are some weights actually not being loaded? If so, how could I observe identical behavior on metrics? Or is this warning wrong? 2) If this warning is correct, how can I force the model to fully load the model exactly as I saved it? 3) Is there any other difference (randomly initialized head, randomly initialized weights) between the t5 that is pretrained and the T5ForConditionalGeneration?
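Question 1 can be checked empirically by diffing the saved and reloaded state dicts; if every tensor matches, the warning refers to an unused checkpoint key rather than a weight that failed to load. A sketch, with `t5-small` and the `/tmp` path as placeholders:

```python
# Save, reload, and compare the weights tensor by tensor.
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.save_pretrained("/tmp/t5-check")  # placeholder path
reloaded = T5ForConditionalGeneration.from_pretrained("/tmp/t5-check")

mismatched = [
    name
    for name, tensor in model.state_dict().items()
    if not torch.equal(tensor, reloaded.state_dict()[name])
]
print("tensors that differ after reload:", mismatched or "none")
```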
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10566/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10565
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10565/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10565/comments
https://api.github.com/repos/huggingface/transformers/issues/10565/events
https://github.com/huggingface/transformers/issues/10565
823,697,614
MDU6SXNzdWU4MjM2OTc2MTQ=
10,565
Mismatch between input and target batch_sizes while training FSMT model
{ "login": "HiGal", "id": 35590424, "node_id": "MDQ6VXNlcjM1NTkwNDI0", "avatar_url": "https://avatars.githubusercontent.com/u/35590424?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HiGal", "html_url": "https://github.com/HiGal", "followers_url": "https://api.github.com/users/HiGal/followers", "following_url": "https://api.github.com/users/HiGal/following{/other_user}", "gists_url": "https://api.github.com/users/HiGal/gists{/gist_id}", "starred_url": "https://api.github.com/users/HiGal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HiGal/subscriptions", "organizations_url": "https://api.github.com/users/HiGal/orgs", "repos_url": "https://api.github.com/users/HiGal/repos", "events_url": "https://api.github.com/users/HiGal/events{/privacy}", "received_events_url": "https://api.github.com/users/HiGal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
Code to reproduce ```python tokenizer = get_fsmt_tokenizer() tokenizer.model_max_length=100 model = get_fsmt_model() freeze_embeds(model) freeze_encoder(model) train_dataset = YandexRuEnDataset("data", split="train") val_dataset = YandexRuEnDataset("data", split="valid") training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total # of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=5000, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs fp16=True, fp16_opt_level='O2', save_steps=20000 ) trainer = Seq2SeqTrainer( model=model, # the instantiated ๐Ÿค— Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset, # evaluation dataset tokenizer=tokenizer, data_collator=collate_sentences(tokenizer) ) trainer.train() ``` Dataset class ```python class YandexRuEnDataset(Dataset): def __init__(self, root_data, split): src = open(f"{root_data}/corpus.en_ru.1m.ru", "r").readlines() tgt = open(f"{root_data}/corpus.en_ru.1m.en", "r").readlines() # src = src[:int(0.1*len(src))] # tgt = tgt[:int(0.1 * len(tgt))] X_train, X_test, y_train, y_test = train_test_split(src, tgt, test_size=0.33, random_state=228) if split == "train": self.src = X_train self.trg = y_train elif split == "valid": self.src = X_test self.trg = y_test def __len__(self): return len(self.src) def __getitem__(self, idx): src = self.src[idx] trg = self.trg[idx] return src, trg def collate_sentences(tokenizer: Tokenizer): def collate_fn(batch): batch = list(zip(*batch)) X_batch = list(batch[0]) y_batch = list(batch[1]) batch = tokenizer.prepare_seq2seq_batch( src_texts=X_batch, tgt_texts=y_batch, padding=True, truncation=True, return_tensors='pt' ) return batch return collate_fn ``` Exception ``` File "/home/farit/PycharmProjects/NMT/train.py", line 40, in <module> trainer.train() File "/home/farit/anaconda3/envs/NMT/lib/python3.8/site-packages/transformers/trainer.py", line 940, in train tr_loss += self.training_step(model, inputs) File "/home/farit/anaconda3/envs/NMT/lib/python3.8/site-packages/transformers/trainer.py", line 1302, in training_step loss = self.compute_loss(model, inputs) File "/home/farit/anaconda3/envs/NMT/lib/python3.8/site-packages/transformers/trainer.py", line 1334, in compute_loss outputs = model(**inputs) File "/home/farit/anaconda3/envs/NMT/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/farit/anaconda3/envs/NMT/lib/python3.8/site-packages/transformers/models/fsmt/modeling_fsmt.py", line 1180, in forward masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.tgt_vocab_size), labels.view(-1)) File "/home/farit/anaconda3/envs/NMT/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/farit/anaconda3/envs/NMT/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 961, in forward return F.cross_entropy(input, target, weight=self.weight, File "/home/farit/anaconda3/envs/NMT/lib/python3.8/site-packages/torch/nn/functional.py", line 2468, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File 
"/home/farit/anaconda3/envs/NMT/lib/python3.8/site-packages/torch/nn/functional.py", line 2261, in nll_loss raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' ValueError: Expected input batch_size (1440) to match target batch_size (1600). ````
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10565/timeline
completed
null
null
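In the traceback of the issue above, 1440 is the flattened logits length (batch × decoder length) and 1600 the flattened labels length (batch × target length), so printing the shapes the collate function actually produces usually localizes the mismatch. A hedged diagnostic, using the public `facebook/wmt19-ru-en` tokenizer in place of the custom `get_fsmt_tokenizer()` helper from the issue:

```python
# Inspect what prepare_seq2seq_batch really returns for one sample.
from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-ru-en")
tokenizer.model_max_length = 100

batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["ะฟั€ะธะฒะตั‚ ะผะธั€"],
    tgt_texts=["hello world"],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

for key, value in batch.items():
    print(key, tuple(value.shape))
# If the labels end up with a different length than the decoder inputs the model
# builds from them, the two .view(-1) sizes in the loss cannot match, which is
# exactly the ValueError in the traceback.
```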
https://api.github.com/repos/huggingface/transformers/issues/10564
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10564/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10564/comments
https://api.github.com/repos/huggingface/transformers/issues/10564/events
https://github.com/huggingface/transformers/issues/10564
823,676,731
MDU6SXNzdWU4MjM2NzY3MzE=
10,564
[Causal Language Modeling] seems not as expected
{ "login": "voidful", "id": 10904842, "node_id": "MDQ6VXNlcjEwOTA0ODQy", "avatar_url": "https://avatars.githubusercontent.com/u/10904842?v=4", "gravatar_id": "", "url": "https://api.github.com/users/voidful", "html_url": "https://github.com/voidful", "followers_url": "https://api.github.com/users/voidful/followers", "following_url": "https://api.github.com/users/voidful/following{/other_user}", "gists_url": "https://api.github.com/users/voidful/gists{/gist_id}", "starred_url": "https://api.github.com/users/voidful/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/voidful/subscriptions", "organizations_url": "https://api.github.com/users/voidful/orgs", "repos_url": "https://api.github.com/users/voidful/repos", "events_url": "https://api.github.com/users/voidful/events{/privacy}", "received_events_url": "https://api.github.com/users/voidful/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is not a problem.\r\nWhen the model predicts the word next to \"Ich\" (given \"Ich\"), the word \"Ich\" cannot attend the words in the future positions (e.g., \"will\", \"ein\", etc).\r\nHowever, when the model predicts the word next to \"ein\" (given \"Ich will ein\"), the word \"Ich\" can attend \"will\" and \"ein\", which is not cheating. \r\nSo, the word embeddings of \"Ich\" in the different right contexts are different.", "> This is not a problem.\r\n> When the model predicts the word next to \"Ich\" (given \"Ich\"), the word \"Ich\" cannot attend the words in the future positions (e.g., \"will\", \"ein\", etc).\r\n> However, when the model predicts the word next to \"ein\" (given \"Ich will ein\"), the word \"Ich\" can attend \"will\" and \"ein\", which is not cheating.\r\n> So, the word embeddings of \"Ich\" in the different right contexts are different.\r\n\r\nI agree this is true for transformer encoder models, but for decode models, due to 'casual mask', the left context should not be affected by the right context. Thatโ€˜s why GPT \"Ich\" hidden will not be changed.\r\n\r\nTherefore, I am curious why CausalLM models can not apply this rule.\r\n", "> > This is not a problem.\r\n> > When the model predicts the word next to \"Ich\" (given \"Ich\"), the word \"Ich\" cannot attend the words in the future positions (e.g., \"will\", \"ein\", etc).\r\n> > However, when the model predicts the word next to \"ein\" (given \"Ich will ein\"), the word \"Ich\" can attend \"will\" and \"ein\", which is not cheating.\r\n> > So, the word embeddings of \"Ich\" in the different right contexts are different.\r\n> \r\n> I agree this is true for transformer encoder models, but for decode models, due to 'casual mask', the left context should not be affected by the right context. 
Thatโ€˜s why GPT \"Ich\" hidden will not be changed.\r\n> \r\n> Therefore, I am curious why CausalLM models can not apply this rule.\r\n\r\n![](https://user-images.githubusercontent.com/10904842/110327232-ac39a800-8054-11eb-82ff-7a36f93e30dc.jpeg)\r\n", "> This is not a problem.\r\n> When the model predicts the word next to \"Ich\" (given \"Ich\"), the word \"Ich\" cannot attend the words in the future positions (e.g., \"will\", \"ein\", etc).\r\n> However, when the model predicts the word next to \"ein\" (given \"Ich will ein\"), the word \"Ich\" can attend \"will\" and \"ein\", which is not cheating.\r\n> So, the word embeddings of \"Ich\" in the different right contexts are different.\r\n\r\nI think that the previous hidden state of the token should not change, since the change of the previous hidden state, there is no way to compute the loss with tokens in once in CausalLM", "I was talking about decoder, not encoder.\r\nThe attention masks vary according to a decoding step.\r\n\r\n(In the following, \"->\" means \"attends to\")\r\nWhen the model predicts the next word given \"Ich\":\r\n- \"Ich\" -> None\r\n\r\nWhen the model predicts the next word given \"Ich will ein\":\r\n- \"Ich\" -> \"will\" and \"ein\"\r\n- \"will\" -> \"Ich\" and \"ein\"\r\n- \"ein\" -> \"Ich\" and \"will\" \r\n\r\nPlease see the \"The Illustrated Masked Self-Attention\" section in the following page.\r\nhttps://jalammar.github.io/illustrated-gpt2/", "> I was talking about decoder, not encoder.\r\n> The attention masks vary according to a decoding step.\r\n> \r\n> (In the following, \"->\" means \"attends to\")\r\n> When the model predicts the next word given \"Ich\":\r\n> \r\n> * \"Ich\" -> None\r\n> \r\n> When the model predicts the next word given \"Ich will ein\":\r\n> \r\n> * \"Ich\" -> \"will\" and \"ein\"\r\n> * \"will\" -> \"Ich\" and \"ein\"\r\n> * \"ein\" -> \"Ich\" and \"will\"\r\n> \r\n> Please see the \"The Illustrated Masked Self-Attention\" section in the following page.\r\n> https://jalammar.github.io/illustrated-gpt2/\r\n\r\nhttps://huggingface.co/blog/encoder-decoder#decoder\r\n\r\nauto-regressive models, such as GPT2, have the same architecture as transformer-based decoder models if one removes the cross-attention layer\r\n\r\nOn a side-note, autoencoding models, such as Bert, have the same architecture as transformer-based encoder models.\r\n\r\nSo, without involving cross-attention, the main difference between transformer encoder and decoder is that encoder uses bi-directional self-attention, decoder uses uni-directional self-attention layer instead.\r\n\r\nIch weight will attend to \"will\", but it's for \"will\" token weight, not for Ich token.\r\n\r\n![](https://user-images.githubusercontent.com/10904842/110340116-dc3c7780-8063-11eb-96a1-8a4b0c80b0b1.jpeg)\r\n", "All the theory is right.\r\nI got the reason, it is because of the bias...\r\n\r\nIn `from_pretrained` function, it will call model.eval() by default which will disable all the bias in model.\r\nhttps://github.com/huggingface/transformers/blob/88a951e3cc00f56b94d9b93dbc35a3812cd88747/src/transformers/modeling_utils.py#L1190\r\n\r\n\r\nHowever in `from_config`, it won't call model.eval by default, so the result is affected by bias.\r\nhttps://github.com/huggingface/transformers/blob/d26b37e744ea980977e266adf48736451b73c583/src/transformers/models/auto/modeling_auto.py#L750\r\n\r\n\r\nTherefore, I suggest that we should call model.eval() in `from_config` as same as `from_pretrained `\r\n\r\n\r\n- @patrickvonplaten\r\n- 
@LysandreJik\r\n- @patil-suraj", "`model.eval()` does not disable the bias in the model as far as I know. `model.eval()` simply puts the model into \"non training\" mode meaning that dropout layers are not applied, etc.. . I don't think we need to add a `model.eval()` to the `from_config()` function.", "> `model.eval()` does not disable the bias in the model as far as I know. `model.eval()` simply puts the model into \"non training\" mode meaning that dropout layers are not applied, etc.. . I don't think we need to add a `model.eval()` to the `from_config()` function.\r\n\r\nI don't know why I said `bias` ๐Ÿ˜‚, It should be dropout.\r\n\r\nfrom_config() is more likely for training, so it should be fine not to add `model.eval()` by default. \r\n\r\nThanks for your reply~" ]
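For reference, the thread above pins the discrepancy on dropout being active when a model is created with `from_config` (no `model.eval()` call), rather than on the bias terms. Below is a minimal sketch, not taken from the original thread, illustrating that point; the `gpt2` checkpoint name, the inputs and the tolerance are only illustrative.

```python
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Build a model from its config only (randomly initialized, in training mode by default).
config = AutoConfig.from_pretrained("gpt2")
model = AutoModel.from_config(config)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

model.eval()  # disables dropout, so repeated forward passes become deterministic

ids_a = tokenizer("Ich will ein", return_tensors="pt").input_ids
ids_b = tokenizer("Ich will das", return_tensors="pt").input_ids

with torch.no_grad():
    hidden_a = model(ids_a).last_hidden_state
    hidden_b = model(ids_b).last_hidden_state

# With a causal (uni-directional) model in eval mode, the hidden state of the
# first token should not depend on the tokens to its right.
print(torch.allclose(hidden_a[0, 0], hidden_b[0, 0], atol=1e-3))
```

Without the `model.eval()` call, dropout makes the two forward passes stochastic and the comparison can fail even though the attention mask is still causal.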
1,615
1,615
1,615
CONTRIBUTOR
null
# Problem Causal Models is only attended to the left context. Therefore causal models should not depend on the right tokens. For example, The word embedding of "I" will be unchanged no matter what is in the right In GPT2. Since Causal Language Model are uni-directional self-attention. ``` from transformers import AutoModel,AutoTokenizer, AutoConfig import torch # gpt gpt_model = AutoModel.from_pretrained('gpt2') gpt_tokenizer = AutoTokenizer.from_pretrained('gpt2') embeddings = gpt_model.get_input_embeddings() # create ids of encoded input vectors decoder_input_ids = gpt_tokenizer("<pad> Ich will ein", return_tensors="pt", add_special_tokens=False).input_ids # pass decoder input_ids and encoded input vectors to decoder lm_logits = gpt_model(decoder_input_ids).last_hidden_state # change the decoder input slightly decoder_input_ids_perturbed = gpt_tokenizer("<pad> Ich will das", return_tensors="pt", add_special_tokens=False).input_ids lm_logits_perturbed = gpt_model(decoder_input_ids_perturbed).last_hidden_state # compare values of word embedding of "I" for input_ids and perturbed input_ids print("Is encoding for `Ich` equal to its perturbed version?: ", torch.allclose(lm_logits[0, 0], lm_logits_perturbed[0, 0], atol=1e-3)) ``` Result ``` Is encoding for `Ich` equal to its perturbed version?: True ``` However, when it comes to other models, the result is not following the assumption, the logits will be changed when changing the right side input? What is the reason? Is it a bug? I really want to know the answer, thank you! BERT ``` Is encoding for `Ich` equal to its perturbed version?: False ``` BART ``` Is encoding for `Ich` equal to its perturbed version?: False ``` Roberta ``` Is encoding for `Ich` equal to its perturbed version?: False ``` Experiment notebook [colab](https://colab.research.google.com/drive/15V37RWAL40vhrk-uBIh9m99j1gZMLjUy?usp=sharing) ## Environment info - `transformers` version: 4.3.3 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.7.1+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help - @patrickvonplaten - @LysandreJik - @patil-suraj <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (GPT, Bert, RoBerta, BART ForCausalLM): The problem arises when using: * [ x] the official example scripts: https://huggingface.co/blog/encoder-decoder#decoder ## To reproduce Experiment notebook [colab](https://colab.research.google.com/drive/15V37RWAL40vhrk-uBIh9m99j1gZMLjUy?usp=sharing) ## Expected behavior Causal Models should not be affected by the right context?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10564/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10563
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10563/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10563/comments
https://api.github.com/repos/huggingface/transformers/issues/10563/events
https://github.com/huggingface/transformers/issues/10563
823,672,515
MDU6SXNzdWU4MjM2NzI1MTU=
10,563
I trained BERT on my own data (converted to IDs) with BertForMaskedLM, but when I use the model for further fine-tuning I get this error
{ "login": "lisiyuan1209", "id": 56223656, "node_id": "MDQ6VXNlcjU2MjIzNjU2", "avatar_url": "https://avatars.githubusercontent.com/u/56223656?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lisiyuan1209", "html_url": "https://github.com/lisiyuan1209", "followers_url": "https://api.github.com/users/lisiyuan1209/followers", "following_url": "https://api.github.com/users/lisiyuan1209/following{/other_user}", "gists_url": "https://api.github.com/users/lisiyuan1209/gists{/gist_id}", "starred_url": "https://api.github.com/users/lisiyuan1209/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lisiyuan1209/subscriptions", "organizations_url": "https://api.github.com/users/lisiyuan1209/orgs", "repos_url": "https://api.github.com/users/lisiyuan1209/repos", "events_url": "https://api.github.com/users/lisiyuan1209/events{/privacy}", "received_events_url": "https://api.github.com/users/lisiyuan1209/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
NONE
null
@LysandreJik ## Code info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> Here's my model: ![image](https://user-images.githubusercontent.com/56223656/110211963-61276580-7e99-11eb-8508-42591b5138e6.png) ![image](https://user-images.githubusercontent.com/56223656/110212033-b2cff000-7e99-11eb-8378-116dafe78f8e.png) ## Information Model I am using: my own pre-trained BERT model, which is stored in the path "../bert". The problem arises when using: ![image](https://user-images.githubusercontent.com/56223656/110212083-fb87a900-7e99-11eb-9457-f4b94da1234b.png) Here are the files in the path "../bert": ![image](https://user-images.githubusercontent.com/56223656/110212157-55886e80-7e9a-11eb-8b7f-dce93af1b72e.png) The task I am working on is text matching: * [ ] my own dataset looks like: ![Screenshot 2021-03-06 16 29 16](https://user-images.githubusercontent.com/56223656/110211923-2a514f80-7e99-11eb-86ae-c403a7e7001e.png) ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> What exactly is the problem?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10563/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10562
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10562/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10562/comments
https://api.github.com/repos/huggingface/transformers/issues/10562/events
https://github.com/huggingface/transformers/pull/10562
823,581,101
MDExOlB1bGxSZXF1ZXN0NTg2MDEzNDE0
10,562
Stale bot updated
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stas00, @sgugger, please review this PR. Here is a visualization of what would be done by the bot, were it to be merged today: https://github.com/huggingface/transformers/pull/10562/checks?check_run_id=2113061781\r\n\r\nI have verified that all 11 issues that would be closed have received a warning 10 days ago. Thank you." ]
1,615
1,618
1,618
MEMBER
null
This is an updated version of the stale bot. **It is easier to review the file than the diff, you can find the file [here](https://github.com/huggingface/transformers/blob/d1e516ea0fe9e641a75a89e5a7522392f7dbd59d/scripts/stale.py).** It sends a warning message after 23 days of inactivity, and closes the issue/PR if no activity is detected in the following 7 days. It ignores the following labels (case insensitive): - `Good First Issue` - `Good Second Issue` - `Feature Request` - `New Model` - `WIP` If there are assignees on the issue/PR, then it puts the following comment: `f"This issue has been stale for a while, ping @{assignee.login}"` I propose to leave the PR like it is, and I'll push an empty commit daily to check the result of the stale bot test (I'll remove other tests to ensure that we don't spend unnecessary CI credits). Once we verify that it works as expected for a few days, I propose to merge it. You may check the results of the first run here: https://github.com/huggingface/transformers/runs/2045189559?check_suite_focus=true (Second commit was rate limited)
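For illustration only, and not the actual `scripts/stale.py`: a minimal sketch of such a bot, assuming the PyGithub package and a `GITHUB_TOKEN` environment variable. The label list and the 23/30-day windows mirror the description above; the warning/closing logic is deliberately simplified.

```python
import os
from datetime import datetime, timedelta

from github import Github  # pip install PyGithub

# Labels that exempt an issue/PR from the stale bot (case insensitive).
LABELS_TO_EXEMPT = {"good first issue", "good second issue", "feature request", "new model", "wip"}


def run_stale_bot():
    repo = Github(os.environ["GITHUB_TOKEN"]).get_repo("huggingface/transformers")
    now = datetime.utcnow()
    for issue in repo.get_issues(state="open"):
        labels = {label.name.lower() for label in issue.labels}
        if labels & LABELS_TO_EXEMPT:
            continue
        inactive_for = now - issue.updated_at
        if issue.assignees:
            # With assignees, only ping them instead of warning/closing.
            if inactive_for > timedelta(days=23):
                issue.create_comment(f"This issue has been stale for a while, ping @{issue.assignees[0].login}")
        elif inactive_for > timedelta(days=30):
            # Simplification: a real bot would first verify that the warning
            # comment was actually posted ~7 days earlier before closing.
            issue.edit(state="closed")
        elif inactive_for > timedelta(days=23):
            issue.create_comment(
                "This issue has been automatically marked as stale because it has not had recent activity."
            )


if __name__ == "__main__":
    run_stale_bot()
```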
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10562/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10562/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10562", "html_url": "https://github.com/huggingface/transformers/pull/10562", "diff_url": "https://github.com/huggingface/transformers/pull/10562.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10562.patch", "merged_at": 1618410273000 }
https://api.github.com/repos/huggingface/transformers/issues/10561
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10561/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10561/comments
https://api.github.com/repos/huggingface/transformers/issues/10561/events
https://github.com/huggingface/transformers/pull/10561
823,558,197
MDExOlB1bGxSZXF1ZXN0NTg1OTk2MjQx
10,561
[examples tests on multigpu] resolving require_torch_non_multi_gpu_but_fix_me
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
A while ago I added `@require_torch_non_multi_gpu_but_fix_me` to quickly allow us to start running example tests on multigpu, so this PR resolves that temporary band-aid. This PR: * fixes a few tests to make them run on multi-gpu * removes the decorator where it's not needed after testing that it works * leaves the dropped legacy tests untouched - since they don't run on CI * eliminates `@require_torch_non_multi_gpu_but_fix_me` from existence since it's no longer needed * the only test I couldn't figure out is https://github.com/huggingface/transformers/issues/10560 but it's not worse off than it was - added some refactoring to it and prepared it for multi-gpu if someone knows how to fix it Note: 2 slow tests in `examples/test_examples.py` currently fail because of yet another missed thing in ported `run_seq2seq.py` - but these should be resolved by https://github.com/huggingface/transformers/pull/10551 once that one is merged, so we can merge this PR after it. @LysandreJik
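As a side note, not taken from the repository: a skip decorator of the kind being removed here can be written in a few lines with `unittest.skipUnless` and `torch.cuda.device_count()`. The decorator and test names below are purely illustrative.

```python
import unittest

import torch


def require_torch_non_multi_gpu(test_case):
    """Skip the decorated test when more than one GPU is available (illustrative sketch)."""
    return unittest.skipUnless(torch.cuda.device_count() < 2, "test requires 0 or 1 GPU")(test_case)


class ExampleTest(unittest.TestCase):
    @require_torch_non_multi_gpu
    def test_runs_only_on_single_gpu_setups(self):
        self.assertTrue(True)
```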
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10561/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10561", "html_url": "https://github.com/huggingface/transformers/pull/10561", "diff_url": "https://github.com/huggingface/transformers/pull/10561.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10561.patch", "merged_at": 1615230700000 }
https://api.github.com/repos/huggingface/transformers/issues/10560
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10560/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10560/comments
https://api.github.com/repos/huggingface/transformers/issues/10560/events
https://github.com/huggingface/transformers/issues/10560
823,555,428
MDU6SXNzdWU4MjM1NTU0Mjg=
10,560
[examples] run_glue_deebert.py distrbuted fails
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @JetRunner ", "@stas00 Well it is just not designed for DP or DDP. DeeBERT is for accelerating inference with bs=1 (especially on CPU). I don't believe it should support DP.", "But yes theoretically it can support multi-GPU training but I'm not sure if it's necessary?", "That's good enough for me, I will leave it at 0 or 1-gpu - no problem - thank you for elaborating about the needs of this example, @JetRunner!" ]
1,615
1,615
1,615
CONTRIBUTOR
null
I'm working on making the tests work under multiple gpus and run into and this one that proved to be stubborn, for some reason it doesn't work under any DP scheme. I don't know anything about this script, To reproduce: Note - you need at least 2 gpus: Actually it fails with 1 gpu too (just change to --nproc_per_node=1) ``` python -m torch.distributed.launch --nproc_per_node=2 examples/research_projects/deebert/run_glue_deebert.py --model_type roberta --model_name_or_path roberta-base --task_name MRPC --do_train --do_eval --do_lower_case --data_dir ./tests/fixtures/tests_samples/MRPC/ --max_seq_length 128 --per_gpu_eval_batch_size=1 --per_gpu_train_batch_size=8 --learning_rate 2e-4 --num_train_epochs 3 --overwrite_output_dir --seed 42 --output_dir ./examples/deebert/saved_models/roberta-base/MRPC/two_stage --plot_data_dir ./examples/deebert/results/ --save_steps 0 --overwrite_cache --eval_after_first_stage W reducer.cpp:1084] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) Traceback (most recent call last): File "examples/research_projects/deebert/run_glue_deebert.py", line 730, in <module> main() File "examples/research_projects/deebert/run_glue_deebert.py", line 645, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "examples/research_projects/deebert/run_glue_deebert.py", line 176, in train outputs = model(**inputs) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 872, in _call_impl return forward_call(*input, **kwargs) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 705, in forward if self.reducer._rebuild_buckets(): RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. Since `find_unused_parameters=True` is enabled, this likely means that not all `forward` outputs participate in computing loss. You can fix this by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). [W reducer.cpp:1084] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. 
(function operator()) Iteration: 100%1/1 [00:00<00:00, 1.83it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s] Epoch: 33 | 1/3 [00:00<00:01, 1.82it/s] Traceback (most recent call last): File "examples/research_projects/deebert/run_glue_deebert.py", line 730, in <module> main() File "examples/research_projects/deebert/run_glue_deebert.py", line 645, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "examples/research_projects/deebert/run_glue_deebert.py", line 176, in train outputs = model(**inputs) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 872, in _call_impl return forward_call(*input, **kwargs) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 705, in forward if self.reducer._rebuild_buckets(): RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. Since `find_unused_parameters=True` is enabled, this likely means that not all `forward` outputs participate in computing loss. You can fix this by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). Killing subprocess 2242528 Killing subprocess 2242529 Traceback (most recent call last): File "/home/stas/anaconda3/envs/main-38/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/stas/anaconda3/envs/main-38/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module> main() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main sigkill_handler(signal.SIGTERM, None) # not coming back File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd) subprocess.CalledProcessError: Command '['/home/stas/anaconda3/envs/main-38/bin/python', '-u', 'examples/research_projects/deebert/run_glue_deebert.py', '--local_rank=1', '--model_type', 'roberta', '--model_name_or_path', 'roberta-base', '--task_name', 'MRPC', '--do_train', '--do_eval', '--do_lower_case', '--data_dir', './tests/fixtures/tests_samples/MRPC/', '--max_seq_length', '128', '--per_gpu_eval_batch_size=1', '--per_gpu_train_batch_size=8', '--learning_rate', '2e-4', '--num_train_epochs', '3', '--overwrite_output_dir', '--seed', '42', '--output_dir', './examples/deebert/saved_models/roberta-base/MRPC/two_stage', '--plot_data_dir', './examples/deebert/results/', '--save_steps', '0', '--overwrite_cache', '--eval_after_first_stage']' returned non-zero exit status 1. ``` @LysandreJik
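For context, and not part of the original report: the error above is raised by `DistributedDataParallel` when some outputs of `forward` never contribute to the loss, which DeeBERT's extra highway exits make easy to hit. A minimal sketch of how the wrapper is usually configured is shown below; the launcher environment variable and the `nccl` backend are assumptions, not something taken from `run_glue_deebert.py`.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def wrap_model(model: torch.nn.Module) -> DDP:
    # Recent torch.distributed launchers export LOCAL_RANK for each process;
    # the legacy launcher passes --local_rank as a CLI argument instead.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    # find_unused_parameters=True tolerates parameters skipped in forward, but
    # every *output* of forward must still participate in the loss, otherwise
    # the "Expected to have finished reduction..." error above is raised.
    return DDP(model, device_ids=[local_rank], find_unused_parameters=True)
```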
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10560/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10559
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10559/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10559/comments
https://api.github.com/repos/huggingface/transformers/issues/10559/events
https://github.com/huggingface/transformers/issues/10559
823,550,035
MDU6SXNzdWU4MjM1NTAwMzU=
10,559
[website] installation doc blues
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
Switching docs to a different branch is broken from at least one page. For example, starting from https://huggingface.co/transformers/master/installation.html#caching-models and using the version selector in the upper left corner, switching to any other branch sends you to a 404 page. The base URL is wrong: note how it links to https://huggingface.co/transformers/master/master - "master" appears twice, so every branch gets prefixed with master/branch. I narrowed it down: it happens specifically on the installation page https://huggingface.co/transformers/master/installation.html. I tried a bunch of other pages and they seem to be fine. Oddly, I don't see anything unusual about installation.md in the toctree or its content. If you look at the snapshot, instead of showing `master` for the currently selected version it shows a chunk of the URL instead. ![snapshot_8](https://user-images.githubusercontent.com/10676103/110193531-41d0ff80-7de9-11eb-9697-293daa527983.png) @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10559/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10558
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10558/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10558/comments
https://api.github.com/repos/huggingface/transformers/issues/10558/events
https://github.com/huggingface/transformers/issues/10558
823,540,809
MDU6SXNzdWU4MjM1NDA4MDk=
10,558
Dear developer, does transformers have support for translating Chinese text into English?
{ "login": "j2538318409", "id": 48586045, "node_id": "MDQ6VXNlcjQ4NTg2MDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/48586045?v=4", "gravatar_id": "", "url": "https://api.github.com/users/j2538318409", "html_url": "https://github.com/j2538318409", "followers_url": "https://api.github.com/users/j2538318409/followers", "following_url": "https://api.github.com/users/j2538318409/following{/other_user}", "gists_url": "https://api.github.com/users/j2538318409/gists{/gist_id}", "starred_url": "https://api.github.com/users/j2538318409/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/j2538318409/subscriptions", "organizations_url": "https://api.github.com/users/j2538318409/orgs", "repos_url": "https://api.github.com/users/j2538318409/repos", "events_url": "https://api.github.com/users/j2538318409/events{/privacy}", "received_events_url": "https://api.github.com/users/j2538318409/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @j2538318409,\r\n\r\nI think Mbart model from facebook can do that for you [mbart](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt), \r\n\r\nYou need to specify `zh_CN` as the source language and `en_XX` as the target language.\r\n\r\n This [colab notebook](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/MultilingualMBart.ipynb) does translation from English to hindi. You can use the same for doing translation from Chinese to English by modifying `src_lang` and `trg_lang`.\r\n\r\nThere are other translation models also available in huggingface like [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en). You can find more details on [huggingface models section](https://huggingface.co/models?pipeline_tag=translation)\r\n\r\nI hope this will be helpful to you!", "https://huggingface.co/transformers/master/model_doc/m2m_100.html\r\n\r\nM2M is in master since today, is this what you are looking for ?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,614
1,619
1,619
NONE
null
# ๐Ÿš€ Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10558/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10557
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10557/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10557/comments
https://api.github.com/repos/huggingface/transformers/issues/10557/events
https://github.com/huggingface/transformers/issues/10557
823,503,691
MDU6SXNzdWU4MjM1MDM2OTE=
10,557
[RAG] Expected RAG output after fine tuning
{ "login": "nakasato", "id": 18267312, "node_id": "MDQ6VXNlcjE4MjY3MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/18267312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nakasato", "html_url": "https://github.com/nakasato", "followers_url": "https://api.github.com/users/nakasato/followers", "following_url": "https://api.github.com/users/nakasato/following{/other_user}", "gists_url": "https://api.github.com/users/nakasato/gists{/gist_id}", "starred_url": "https://api.github.com/users/nakasato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nakasato/subscriptions", "organizations_url": "https://api.github.com/users/nakasato/orgs", "repos_url": "https://api.github.com/users/nakasato/repos", "events_url": "https://api.github.com/users/nakasato/events{/privacy}", "received_events_url": "https://api.github.com/users/nakasato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @lhoestq and @patrickvonplaten ", "Hello there,\r\n\r\nI am having the exact same issue when trying to finetune rag. I used the masters version of transformers.\r\n\r\nI tried a couple of different things like:\r\n - My own dataset and wikipedia default one\r\n - In a physical machine and in colab\r\n - With ray and pytorch\r\n - With the rag-sequence-base and rag-sequence-nq\r\n\r\nThey all returned the same documents:\r\n git_log.json\r\n hparams.pkl\r\n\r\nAlso, I realized that if the folder with the trained data is empty, the results are the same.\r\n\r\nI am not sure if I am doing something wrong with the implementation or I am not just using the hparams correctly.\r\n\r\nThanks in advance\r\n\r\nMarcos Menon\r\n", "Hi ! If I recall correctly the model is saved using pytorch lightning [on_save_checkpoint](https://pytorch-lightning.readthedocs.io/en/0.4.9/LightningModule/RequiredTrainerInterface/#on_save_checkpoint).\r\nSo the issue might come from the checkpointing config at\r\n\r\nhttps://github.com/huggingface/transformers/blob/2295d783d5787bcd4c99ea0ddb2a9403697fc126/examples/research_projects/rag/callbacks_rag.py#L36-L43", "Hi, @lhoestq. Thanks for your quick response.\r\n\r\nFrom the log output, I believe the system **is not even starting the network training**. Hence, I guess this issue is even a step **before the saving step** - also because I did not change any code provided by the main transformers library.\r\n\r\nAnother reason for it: the **output logs don't change**, even when I run the ```!python finetune_rag.py ...``` keeping my ```data_dir``` totally **empty**. So, I think the system is not training at all **or** maybe there is a mistake in my input, so the code skips the training.\r\n\r\nAnyway, bellow, there's a sample of the training data I'm using. They all have one question per line in the source and the respective expected answer in the target (fine-tune for a QA task).\r\n\r\n**train.source**\r\n```\r\nHow big is the Brazilian coastline?\r\nWhy was the Port of Santos known as the port of death in the past?\r\nWhich Brazilian state has the largest number of coastal cities?\r\n```\r\n**train.target**\r\n```\r\n7,491 km.\r\nThe Yellow Fever.\r\nBahia state.\r\n```", "Oh ok. Maybe this is because you need the `do_train` flag ? See here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/fcf10214e00ede3a3a4d8507022bc8c679c9aff4/examples/research_projects/rag/finetune_rag.sh#L15-L16", "@lhoestq, that's it; it has solved the problem - actually, quite a simple thing.\r\n\r\nSince the central ideia of the fine-tune itself is to provide a way to _train_ the model, I guess it'd be nice to have these params shown in the [README](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag) too - despite of their immediate need, there's no mention of them there.\r\n\r\nAnyway, thank you again, @lhoestq.", "You're totally right they must be in the README. Feel free to open a PR to add it, if you want to contribute :)", "So, that's right. Meanwhile, I'm going to close this issue :)", "@nakasato @MMenonJ I am also fine-tuning the RAG for my custom dataset. I am using rag-token model. Although I use an already trained rag, the loss starts around 70. Can you let me know how your loss changes? At what value it starts?", "Hi, @shamanez. Sure: in my last training round, with a dataset of ~30MB (for DPR) and 2400 question-answer pairs in the training data for fine-tune, the loss started off at 118.2, and ended at 30.2, after 100 epochs. 
I'm using a rag-sequence-base model. In the different settings I've tried so far, it's common to see the same pattern: it starts around ~130 and ends around ~30.\r\n\r\nNevertheless, maybe because of the extreme specificity of my data (abstracts), or because of the quality of the question-answer pairs I have (which were generated automatically with a T5 model), the final results were largely nonsense in this case.\r\n\r\nBtw, since you're also working with RAG, perhaps we can exchange our working experience. Feel free to send me an email ;)", "Thanks a lot. I made some modifications to RAG, like end-to-end training of the retrieval. The code is almost finished; I will share it very soon with documentation.", "Cool. Good job! ;)" ]
1,614
1,617
1,616
NONE
null
Hi there. Perhaps the following isnโ€™t even a real issue, but Iโ€™m a bit confused with the current outputs I got. Iโ€™m trying to fine tune RAG on a bunch of question-answer pairs I have (for while, not that much, < 1k ones). I have splitted them as suggested (train.source, train.target, val.sourceโ€ฆ). After running the ```finetune_rag.py```, the outputs generated were **only two files (~2 kB)**: - git_log.json - hparams.pkl Is that right? Because I was expecting *a big binary file or something like that containing the weight matrices*, so I could use them afterwards in a new trial. Could you please tell me whatโ€™s the point Iโ€™m missing here? ---------------------- I provide more details below. Btw, I have two NVIDIA RTX 3090, 24GB each, but they were barely used in the whole process (which took ~3 hours). **Command:** ``` python finetune_rag.py \ --data_dir rag_manual_qa_finetuning \ --output_dir output_ft \ --model_name_or_path rag-sequence-base \ --model_type rag_sequence \ --gpus 2 \ --distributed_retriever pytorch ``` **Logs** (in fact, itโ€™s strange but the logs even seem to be generated in duplicate - I donโ€™t know why): ``` loading configuration file rag-sequence-base/config.json Model config RagConfig { "architectures": [ "RagSequenceForGeneration" ], "dataset": "wiki_dpr", "dataset_split": "train", "do_deduplication": true, "do_marginalize": false, "doc_sep": " // ", "exclude_bos_score": false, "forced_eos_token_id": 2, "generator": { "_name_or_path": "", "_num_labels": 3, "activation_dropout": 0.0, "activation_function": "gelu", "add_bias_logits": false, "add_cross_attention": false, "add_final_layer_norm": false, "architectures": [ "BartModel", "BartForMaskedLM", "BartForSequenceClassification" ], "attention_dropout": 0.0, "bad_words_ids": null, "bos_token_id": 0, "chunk_size_feed_forward": 0, "classif_dropout": 0.0, "classifier_dropout": 0.0, "d_model": 1024, "decoder_attention_heads": 16, "decoder_ffn_dim": 4096, "decoder_layerdrop": 0.0, "decoder_layers": 12, "decoder_start_token_id": 2, "diversity_penalty": 0.0, "do_sample": false, "dropout": 0.1, "early_stopping": false, "encoder_attention_heads": 16, "encoder_ffn_dim": 4096, "encoder_layerdrop": 0.0, "encoder_layers": 12, "encoder_no_repeat_ngram_size": 0, "eos_token_id": 2, "extra_pos_embeddings": 2, "finetuning_task": null, "force_bos_token_to_be_generated": false, "forced_bos_token_id": null, "forced_eos_token_id": 2, "gradient_checkpointing": false, "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2" }, "init_std": 0.02, "is_decoder": false, "is_encoder_decoder": true, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2 }, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 1024, "min_length": 0, "model_type": "bart", "no_repeat_ngram_size": 0, "normalize_before": false, "normalize_embedding": true, "num_beam_groups": 1, "num_beams": 1, "num_hidden_layers": 12, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_past": false, "output_scores": false, "pad_token_id": 1, "prefix": " ", "pruned_heads": {}, "repetition_penalty": 1.0, "return_dict": false, "return_dict_in_generate": false, "scale_embedding": false, "sep_token_id": null, "static_position_embeddings": false, "task_specific_params": { "summarization": { "early_stopping": true, "length_penalty": 2.0, "max_length": 142, "min_length": 56, "no_repeat_ngram_size": 3, "num_beams": 4 } }, "temperature": 1.0, "tie_encoder_decoder": false, "tie_word_embeddings": true, "tokenizer_class": null, 
"top_k": 50, "top_p": 1.0, "torchscript": false, "transformers_version": "4.4.0.dev0", "use_bfloat16": false, "use_cache": true, "vocab_size": 50265 }, "index_name": "exact", "index_path": null, "is_encoder_decoder": true, "label_smoothing": 0.0, "max_combined_length": 300, "model_type": "rag", "n_docs": 5, "output_retrieved": false, "passages_path": null, "question_encoder": { "_name_or_path": "", "add_cross_attention": false, "architectures": [ "DPRQuestionEncoder" ], "attention_probs_dropout_prob": 0.1, "bad_words_ids": null, "bos_token_id": null, "chunk_size_feed_forward": 0, "decoder_start_token_id": null, "diversity_penalty": 0.0, "do_sample": false, "early_stopping": false, "encoder_no_repeat_ngram_size": 0, "eos_token_id": null, "finetuning_task": null, "forced_bos_token_id": null, "forced_eos_token_id": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "is_encoder_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "min_length": 0, "model_type": "dpr", "no_repeat_ngram_size": 0, "num_attention_heads": 12, "num_beam_groups": 1, "num_beams": 1, "num_hidden_layers": 12, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_scores": false, "pad_token_id": 0, "position_embedding_type": "absolute", "prefix": null, "projection_dim": 0, "pruned_heads": {}, "repetition_penalty": 1.0, "return_dict": false, "return_dict_in_generate": false, "sep_token_id": null, "task_specific_params": null, "temperature": 1.0, "tie_encoder_decoder": false, "tie_word_embeddings": true, "tokenizer_class": null, "top_k": 50, "top_p": 1.0, "torchscript": false, "transformers_version": "4.4.0.dev0", "type_vocab_size": 2, "use_bfloat16": false, "use_cache": true, "vocab_size": 30522 }, "reduce_loss": false, "retrieval_batch_size": 8, "retrieval_vector_size": 768, "title_sep": " / ", "use_cache": true, "use_dummy_dataset": false, "vocab_size": null } Model name 'rag-sequence-base' not found in model shortcut name list (facebook/dpr-question_encoder-single-nq-base, facebook/dpr-question_encoder-multiset-base). Assuming 'rag-sequence-base' is a path, a model identifier, or url to a directory containing tokenizer files. Didn't find file rag-sequence-base/question_encoder_tokenizer/tokenizer.json. We won't load it. Didn't find file rag-sequence-base/question_encoder_tokenizer/added_tokens.json. We won't load it. loading file rag-sequence-base/question_encoder_tokenizer/vocab.txt loading file None loading file None loading file rag-sequence-base/question_encoder_tokenizer/special_tokens_map.json loading file rag-sequence-base/question_encoder_tokenizer/tokenizer_config.json Model name 'rag-sequence-base' not found in model shortcut name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). Assuming 'rag-sequence-base' is a path, a model identifier, or url to a directory containing tokenizer files. Didn't find file rag-sequence-base/generator_tokenizer/tokenizer.json. We won't load it. Didn't find file rag-sequence-base/generator_tokenizer/added_tokens.json. We won't load it. 
loading file rag-sequence-base/generator_tokenizer/vocab.json loading file rag-sequence-base/generator_tokenizer/merges.txt loading file None loading file None loading file rag-sequence-base/generator_tokenizer/special_tokens_map.json loading file rag-sequence-base/generator_tokenizer/tokenizer_config.json Loading passages from wiki_dpr Downloading: 9.64kB [00:00, 10.8MB/s] Downloading: 67.5kB [00:00, 59.5MB/s] WARNING:datasets.builder:Using custom data configuration psgs_w100.nq.no_index-dummy=False,with_index=False Downloading and preparing dataset wiki_dpr/psgs_w100.nq.no_index (download: 66.09 GiB, generated: 73.03 GiB, post-processed: Unknown size, total: 139.13 GiB) to /home/usp/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.no_index-dummy=False,with_index=False/0.0.0/91b145e64f5bc8b55a7b3e9f730786ad6eb19cd5bc020e2e02cdf7d0cb9db9c1... Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 4.69G/4.69G [07:11<00:00, 10.9MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:27<00:00, 9.00MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:36<00:00, 8.47MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:37<00:00, 8.41MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:38<00:00, 8.36MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:40<00:00, 8.25MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:58<00:00, 7.45MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:58<00:00, 7.43MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:00<00:00, 7.34MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:04<00:00, 7.17MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:05<00:00, 7.13MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:07<00:00, 7.06MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:10<00:00, 6.94MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:24<00:00, 6.48MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.32G/1.32G [03:27<00:00, 6.38MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:33<00:00, 6.21MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [04:57<00:00, 4.45MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:36<00:00, 8.47MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:28<00:00, 8.94MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:44<00:00, 8.03MB/s] Downloading: 
100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:55<00:00, 7.54MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:28<00:00, 8.92MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:28<00:00, 8.90MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:56<00:00, 7.49MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:19<00:00, 6.63MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:53<00:00, 7.63MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:00<00:00, 7.33MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:11<00:00, 6.92MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:14<00:00, 6.80MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.32G/1.32G [03:06<00:00, 7.10MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:35<00:00, 6.16MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:50<00:00, 5.76MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:28<00:00, 8.93MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:32<00:00, 8.67MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:07<00:00, 7.05MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:53<00:00, 7.62MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:22<00:00, 6.56MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:47<00:00, 7.93MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:26<00:00, 9.06MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:40<00:00, 8.25MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:42<00:00, 8.17MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:54<00:00, 7.59MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:41<00:00, 8.22MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:18<00:00, 6.69MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:30<00:00, 8.83MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [03:00<00:00, 7.34MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:20<00:00, 9.44MB/s] Downloading: 
100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:24<00:00, 9.19MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:21<00:00, 9.38MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:18<00:00, 9.59MB/s] Downloading: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.33G/1.33G [02:19<00:00, 9.53MB/s] 0 examples [00:00, ? examples/s]2021-03-05 12:11:39.666323: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 Dataset wiki_dpr downloaded and prepared to /home/usp/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.no_index-dummy=False,with_index=False/0.0.0/91b145e64f5bc8b55a7b3e9f730786ad6eb19cd5bc020e2e02cdf7d0cb9db9c1. Subsequent calls will reuse this data. loading weights file rag-sequence-base/pytorch_model.bin All model checkpoint weights were used when initializing RagSequenceForGeneration. Some weights of RagSequenceForGeneration were not initialized from the model checkpoint at rag-sequence-base and are newly initialized: ['rag.generator.lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. loading configuration file rag-sequence-base/config.json Model config RagConfig { "architectures": [ "RagSequenceForGeneration" ], "dataset": "wiki_dpr", "dataset_split": "train", "do_deduplication": true, "do_marginalize": false, "doc_sep": " // ", "exclude_bos_score": false, "forced_eos_token_id": 2, "generator": { "_name_or_path": "", "_num_labels": 3, "activation_dropout": 0.0, "activation_function": "gelu", "add_bias_logits": false, "add_cross_attention": false, "add_final_layer_norm": false, "architectures": [ "BartModel", "BartForMaskedLM", "BartForSequenceClassification" ], "attention_dropout": 0.0, "bad_words_ids": null, "bos_token_id": 0, "chunk_size_feed_forward": 0, "classif_dropout": 0.0, "classifier_dropout": 0.0, "d_model": 1024, "decoder_attention_heads": 16, "decoder_ffn_dim": 4096, "decoder_layerdrop": 0.0, "decoder_layers": 12, "decoder_start_token_id": 2, "diversity_penalty": 0.0, "do_sample": false, "dropout": 0.1, "early_stopping": false, "encoder_attention_heads": 16, "encoder_ffn_dim": 4096, "encoder_layerdrop": 0.0, "encoder_layers": 12, "encoder_no_repeat_ngram_size": 0, "eos_token_id": 2, "extra_pos_embeddings": 2, "finetuning_task": null, "force_bos_token_to_be_generated": false, "forced_bos_token_id": null, "forced_eos_token_id": 2, "gradient_checkpointing": false, "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2" }, "init_std": 0.02, "is_decoder": false, "is_encoder_decoder": true, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2 }, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 1024, "min_length": 0, "model_type": "bart", "no_repeat_ngram_size": 0, "normalize_before": false, "normalize_embedding": true, "num_beam_groups": 1, "num_beams": 1, "num_hidden_layers": 12, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_past": false, "output_scores": false, "pad_token_id": 1, "prefix": " ", "pruned_heads": {}, "repetition_penalty": 1.0, "return_dict": false, "return_dict_in_generate": false, "scale_embedding": false, "sep_token_id": null, "static_position_embeddings": false, "task_specific_params": { "summarization": { "early_stopping": true, 
"length_penalty": 2.0, "max_length": 142, "min_length": 56, "no_repeat_ngram_size": 3, "num_beams": 4 } }, "temperature": 1.0, "tie_encoder_decoder": false, "tie_word_embeddings": true, "tokenizer_class": null, "top_k": 50, "top_p": 1.0, "torchscript": false, "transformers_version": "4.4.0.dev0", "use_bfloat16": false, "use_cache": true, "vocab_size": 50265 }, "index_name": "exact", "index_path": null, "is_encoder_decoder": true, "label_smoothing": 0.0, "max_combined_length": 300, "model_type": "rag", "n_docs": 5, "output_retrieved": false, "passages_path": null, "question_encoder": { "_name_or_path": "", "add_cross_attention": false, "architectures": [ "DPRQuestionEncoder" ], "attention_probs_dropout_prob": 0.1, "bad_words_ids": null, "bos_token_id": null, "chunk_size_feed_forward": 0, "decoder_start_token_id": null, "diversity_penalty": 0.0, "do_sample": false, "early_stopping": false, "encoder_no_repeat_ngram_size": 0, "eos_token_id": null, "finetuning_task": null, "forced_bos_token_id": null, "forced_eos_token_id": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "is_encoder_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "min_length": 0, "model_type": "dpr", "no_repeat_ngram_size": 0, "num_attention_heads": 12, "num_beam_groups": 1, "num_beams": 1, "num_hidden_layers": 12, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_scores": false, "pad_token_id": 0, "position_embedding_type": "absolute", "prefix": null, "projection_dim": 0, "pruned_heads": {}, "repetition_penalty": 1.0, "return_dict": false, "return_dict_in_generate": false, "sep_token_id": null, "task_specific_params": null, "temperature": 1.0, "tie_encoder_decoder": false, "tie_word_embeddings": true, "tokenizer_class": null, "top_k": 50, "top_p": 1.0, "torchscript": false, "transformers_version": "4.4.0.dev0", "type_vocab_size": 2, "use_bfloat16": false, "use_cache": true, "vocab_size": 30522 }, "reduce_loss": false, "retrieval_batch_size": 8, "retrieval_vector_size": 768, "title_sep": " / ", "use_cache": true, "use_dummy_dataset": false, "vocab_size": null } Model name 'rag-sequence-base' not found in model shortcut name list (facebook/dpr-question_encoder-single-nq-base, facebook/dpr-question_encoder-multiset-base). Assuming 'rag-sequence-base' is a path, a model identifier, or url to a directory containing tokenizer files. Didn't find file rag-sequence-base/question_encoder_tokenizer/tokenizer.json. We won't load it. Didn't find file rag-sequence-base/question_encoder_tokenizer/added_tokens.json. We won't load it. loading file rag-sequence-base/question_encoder_tokenizer/vocab.txt loading file None loading file None loading file rag-sequence-base/question_encoder_tokenizer/special_tokens_map.json loading file rag-sequence-base/question_encoder_tokenizer/tokenizer_config.json Model name 'rag-sequence-base' not found in model shortcut name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). Assuming 'rag-sequence-base' is a path, a model identifier, or url to a directory containing tokenizer files. Didn't find file rag-sequence-base/generator_tokenizer/tokenizer.json. We won't load it. 
Didn't find file rag-sequence-base/generator_tokenizer/added_tokens.json. We won't load it. loading file rag-sequence-base/generator_tokenizer/vocab.json loading file rag-sequence-base/generator_tokenizer/merges.txt loading file None loading file None loading file rag-sequence-base/generator_tokenizer/special_tokens_map.json loading file rag-sequence-base/generator_tokenizer/tokenizer_config.json GPU available: True, used: True INFO:lightning:GPU available: True, used: True TPU available: False, using: 0 TPU cores INFO:lightning:TPU available: False, using: 0 TPU cores LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1] INFO:lightning:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10557/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10557/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10556
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10556/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10556/comments
https://api.github.com/repos/huggingface/transformers/issues/10556/events
https://github.com/huggingface/transformers/pull/10556
823,500,968
MDExOlB1bGxSZXF1ZXN0NTg1OTUwMjQ4
10,556
Layoutlm tf
{ "login": "atahmasb", "id": 25216362, "node_id": "MDQ6VXNlcjI1MjE2MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/25216362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atahmasb", "html_url": "https://github.com/atahmasb", "followers_url": "https://api.github.com/users/atahmasb/followers", "following_url": "https://api.github.com/users/atahmasb/following{/other_user}", "gists_url": "https://api.github.com/users/atahmasb/gists{/gist_id}", "starred_url": "https://api.github.com/users/atahmasb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atahmasb/subscriptions", "organizations_url": "https://api.github.com/users/atahmasb/orgs", "repos_url": "https://api.github.com/users/atahmasb/repos", "events_url": "https://api.github.com/users/atahmasb/events{/privacy}", "received_events_url": "https://api.github.com/users/atahmasb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Oh, no! Did you have some issues with a rebase? Can we help in any way?", "> Oh, no! Did you have some issues with a rebase? Can we help in any way?\r\n\r\nI do! For some reason when I rebased I was not able to push my changes. It was rejected because my branch was diverged too much form the remote. Then my option was to pull all changes which results in many file changes that are not mine. ", "Ah! Would you like me to try and retrieve your commits and push them on a new branch of your repository? I can take care of the rebasing as well.", "> Ah! Would you like me to try and retrieve your commits and push them on a new branch of your repository? I can take care of the rebasing as well.\r\n\r\nthat would be great, ty.", "I'm getting permission denied on your fork, can you invite me to it so I can push the new branch? Thanks!", "> I'm getting permission denied on your fork, can you invite me to it so I can push the new branch? Thanks!\r\n\r\ndone. Let me know if there was any access issues and Thanks again for helping me with this", "You can find the branch [here](https://github.com/atahmasb/transformers/tree/layout-lm-tf-2)! I've rebased it for you, and fixed the code quality issues. The `TFLayoutLMForSequenceClassification` class was in double so I removed one of them. Let me know if this shouldn't have been removed!", "> You can find the branch [here](https://github.com/atahmasb/transformers/tree/layout-lm-tf-2)! I've rebased it for you, and fixed the code quality issues. The `TFLayoutLMForSequenceClassification` class was in double so I removed one of them. Let me know if this shouldn't have been removed!\r\n\r\nThanks! You're awesome!" ]
1,614
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds TF version of LayoutLM for issue [(10312)](https://github.com/huggingface/transformers/issues/10312) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10556/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10556", "html_url": "https://github.com/huggingface/transformers/pull/10556", "diff_url": "https://github.com/huggingface/transformers/pull/10556.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10556.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10555
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10555/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10555/comments
https://api.github.com/repos/huggingface/transformers/issues/10555/events
https://github.com/huggingface/transformers/pull/10555
823,460,259
MDExOlB1bGxSZXF1ZXN0NTg1OTE2NDAy
10,555
Add new GLUE example with no Trainer.
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This kind of logging is very useful for researchers. Let's add them back?\r\n\r\nhttps://github.com/google-research/bert/blob/master/run_classifier.py#L871", "In a nutshell, I'll burst into tears if we can just have Google's `run_classifier.py` back but with `accelerate` :)", "Maybe we should tag other researchers (even external) to give some feedback. cc @VictorSanh @TevenLeScao ", "Addressed most of your comments except the logging/saving steps. I do not have time to add this right now, so I suggest we merge the current version and someone from the community can finish it." ]
1,614
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR adds a new GLUE example that does not use the `Trainer`, leveraging [accelerate](https://github.com/huggingface/accelerate) for the distributed training. The necessary instructions are added in the text-classification README. cc @JetRunner as it should be of interest to you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10555/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10555", "html_url": "https://github.com/huggingface/transformers/pull/10555", "diff_url": "https://github.com/huggingface/transformers/pull/10555.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10555.patch", "merged_at": 1615386559000 }
https://api.github.com/repos/huggingface/transformers/issues/10554
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10554/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10554/comments
https://api.github.com/repos/huggingface/transformers/issues/10554/events
https://github.com/huggingface/transformers/pull/10554
823,364,477
MDExOlB1bGxSZXF1ZXN0NTg1ODM0NDg5
10,554
Fixed dead link in Trainer documentation
{ "login": "joawar", "id": 46854160, "node_id": "MDQ6VXNlcjQ2ODU0MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/46854160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joawar", "html_url": "https://github.com/joawar", "followers_url": "https://api.github.com/users/joawar/followers", "following_url": "https://api.github.com/users/joawar/following{/other_user}", "gists_url": "https://api.github.com/users/joawar/gists{/gist_id}", "starred_url": "https://api.github.com/users/joawar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joawar/subscriptions", "organizations_url": "https://api.github.com/users/joawar/orgs", "repos_url": "https://api.github.com/users/joawar/repos", "events_url": "https://api.github.com/users/joawar/events{/privacy}", "received_events_url": "https://api.github.com/users/joawar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,614
1,614
1,614
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10548 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10554/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10554", "html_url": "https://github.com/huggingface/transformers/pull/10554", "diff_url": "https://github.com/huggingface/transformers/pull/10554.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10554.patch", "merged_at": 1614974197000 }
https://api.github.com/repos/huggingface/transformers/issues/10553
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10553/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10553/comments
https://api.github.com/repos/huggingface/transformers/issues/10553/events
https://github.com/huggingface/transformers/pull/10553
823,287,894
MDExOlB1bGxSZXF1ZXN0NTg1NzcyMTQx
10,553
Transformers upgrade
{ "login": "kailashkarthiks", "id": 78363282, "node_id": "MDQ6VXNlcjc4MzYzMjgy", "avatar_url": "https://avatars.githubusercontent.com/u/78363282?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kailashkarthiks", "html_url": "https://github.com/kailashkarthiks", "followers_url": "https://api.github.com/users/kailashkarthiks/followers", "following_url": "https://api.github.com/users/kailashkarthiks/following{/other_user}", "gists_url": "https://api.github.com/users/kailashkarthiks/gists{/gist_id}", "starred_url": "https://api.github.com/users/kailashkarthiks/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kailashkarthiks/subscriptions", "organizations_url": "https://api.github.com/users/kailashkarthiks/orgs", "repos_url": "https://api.github.com/users/kailashkarthiks/repos", "events_url": "https://api.github.com/users/kailashkarthiks/events{/privacy}", "received_events_url": "https://api.github.com/users/kailashkarthiks/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,614
1,614
1,614
NONE
null
Transformers upgrade - redoing all ec-ml related changes
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10553/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10553", "html_url": "https://github.com/huggingface/transformers/pull/10553", "diff_url": "https://github.com/huggingface/transformers/pull/10553.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10553.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10552
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10552/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10552/comments
https://api.github.com/repos/huggingface/transformers/issues/10552/events
https://github.com/huggingface/transformers/pull/10552
823,269,619
MDExOlB1bGxSZXF1ZXN0NTg1NzU2OTkz
10,552
Handle padding in decoder_inputs_id when using generate
{ "login": "LittlePea13", "id": 26126169, "node_id": "MDQ6VXNlcjI2MTI2MTY5", "avatar_url": "https://avatars.githubusercontent.com/u/26126169?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LittlePea13", "html_url": "https://github.com/LittlePea13", "followers_url": "https://api.github.com/users/LittlePea13/followers", "following_url": "https://api.github.com/users/LittlePea13/following{/other_user}", "gists_url": "https://api.github.com/users/LittlePea13/gists{/gist_id}", "starred_url": "https://api.github.com/users/LittlePea13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LittlePea13/subscriptions", "organizations_url": "https://api.github.com/users/LittlePea13/orgs", "repos_url": "https://api.github.com/users/LittlePea13/repos", "events_url": "https://api.github.com/users/LittlePea13/events{/privacy}", "received_events_url": "https://api.github.com/users/LittlePea13/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Some tests failed due to pad_token_id being None. Is this really a possibility for any Transformer model?", "Added a check for `token_pad_id` equal to `None` and tests pass, but it is not very elegant, any feedback will be appreciated.", "I just noticed that one test, `run_tests_flax`, failed, however no changes are made that should affect that. @patrickvonplaten or @patil-suraj, when you have the time let me know if there's anything else I should be aware of regarding that test.", "Hi @LittlePea13 \r\n\r\nThanks a lot for the PR. \r\n\r\nI understand the problem but it seems like an edge case and overall I'm not really in favor of supporting this. The philosophy of `generate` is to keep it simple and extensible and not try to cover all use-cases.\r\nWe generally try to keep such if/else statements to a minimum. This change will introduce a lot of complexity in the code.\r\n\r\nAlso with this change, we won't be able to use `use_cache`, which will slow down generation significantly.\r\n\r\nOne could always just call generate multiple times if the `decoder_input_ids` are of different length. I would rater batch the sentences together which require the `decoder_input_ids` of the same length and then pass those to `generate` instead of passing `decoder_input_ids` of different lengths. Which would cover this use-case.\r\n\r\nBut thanks a lot for your work! It's a good practice to first propose and discuss the solution in the issues before opening a PR. \r\n\r\nWhat do you think @patrickvonplaten?", "Hi @patil-suraj thanks for the feedback, I agree that this introduces something specific, and I am not too happy on how it is dealt with for the reasons you point out. However, it seemed useful to have a way to deal with it since it is what one would expect if one inputs `decoder_input_ids` to the `generate` function with some padding.\r\n\r\nI opened an issue but was too impatient and opened a PR (sorry about that), basically because I needed this for my own work and coded it anyways.\r\n\r\nPerhaps this doesn't belong here, but in any case I feel like including extra documentation about `decoder_input_ids` [here](https://huggingface.co/transformers/main_classes/model.html?highlight=beam%20search#transformers.generation_utils.GenerationMixin.generate) would be beneficial, maybe explaining this behavior (ie. they have to be of the same length).", "Hey @LittlePea13, \r\n\r\nThanks for raising awareness for your problem and thanks for opening a PR! I agree with @patil-suraj here and would prefer to not include such specific code in `generate()`. \r\n\r\nIn general the philosophy for more specific use cases of `generate()` is to directly use the \"sub\"-generate methods, such as `sample()`, `greedy_search()`, and `beam_search()` as explained here: https://discuss.huggingface.co/t/big-generate-refactor/1857\r\n\r\nI think in your use case, we could do a similar trick that what was done for GPT2 for batched inference:\r\nhttps://discuss.huggingface.co/t/batch-generation-with-gpt2/1517\r\n\r\nThis means that instead of passing `[\"This is <PAD> <PAD>\", \"This is a sentence\"]` as `decoder_input_ids` you could pass `[\"<PAD> <PAD> This is\", \"This is a sentence\"]` to make `generate()` work. Could you try this out? Also, I'd recommend to directly use `beam_search()` instead of `generate()` in your example. 
If it doesn't work feel free to post on the forum: https://discuss.huggingface.co/ and tag me - I'll try to help you make it work then :-) ", "I think a well-written forum post would also be a great way of documenting this behavior. However I do think, it's a bit too specific for the general docs of `generate()` since they don't even include `decoder_input_ids` as an input argument to the function.", "@patil-suraj and I will keep an eye out if more people run into this problem! Thanks a lot for bringing it up in any case :-)", "Thanks both for the review. Indeed, I didn't think of just moving the padding to the left, much more elegant. I tried it out and it works but it produces different outputs than without padding.\r\n\r\nThe issue here is that by using directly the different \"sub\"-generate methods is not possible to apply the same changes, so if one wants to have the same results as if there was no difference in the sentences lengths they would still need to do a similar tweak as the one here on each method.\r\n\r\nBut this is very narrow, I don't even know if it affects performance in my case when compared to moving padding to the left. I am closing this and in case someone has a similar issue you can just refer to the changes here. \r\n\r\nCheers!" ]
1,614
1,616
1,616
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10478 for pytorch version of `generate()` in generate_utils.py As described in the issue, when `decoder_input_ids` have different lengths and require padding, generation continues after the padding tokens. This PR modifies that behavior so that tokens are generated before the padding for each element in the batch. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? I am adding those who I see more often in the git blame of the file: @patrickvonplaten, @patil-suraj, @yjernite <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> This is my first PR, so let me use it to thank everyone involved in this library for all the cool work :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10552/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10552", "html_url": "https://github.com/huggingface/transformers/pull/10552", "diff_url": "https://github.com/huggingface/transformers/pull/10552.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10552.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10551
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10551/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10551/comments
https://api.github.com/repos/huggingface/transformers/issues/10551/events
https://github.com/huggingface/transformers/pull/10551
823,246,380
MDExOlB1bGxSZXF1ZXN0NTg1NzM3Njc0
10,551
Added max_sample_ arguments
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[ { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
null
[]
[ "Thank you for having a closer look that I did, @sgugger.\r\n\r\nIdeally we should have tests that would have caught this", "Hi @stas00,\r\n\r\nHow can we add test cases for this testing? If we check `max_train_samples` and `max_valid_samples` from metrics and add assert statement that might be possible.\r\n ", "> How can we add test cases for this testing? If we check `max_train_samples` and `max_valid_samples` from metrics and add assert statement that might be possible.\r\n\r\nYes, that's exactly the idea", "Hi @stas00,\r\n\r\nWhat should I do if I got this error while using git,\r\n```\r\n$ git push origin argument-addition\r\nTo https://github.com/bhadreshpsavani/transformers.git\r\n ! [rejected] argument-addition -> argument-addition (non-fast-forward)\r\nerror: failed to push some refs to 'https://github.com/bhadreshpsavani/transformers.git'\r\nhint: Updates were rejected because the tip of your current branch is behind\r\nhint: its remote counterpart. Integrate the remote changes (e.g.\r\nhint: 'git pull ...') before pushing again.\r\nhint: See the 'Note about fast-forwards' in 'git push --help' for details.\r\n```", "I found that I need to use this command `git push -f origin argument-addition` with fast-forward flag.\r\nThanks, @stas00 I used your rebase script. It's cool! I did it the first time!", "Some unrelated to your work CI tests were failing so I rebased your PR branch to master, and then they passed. You may have not noticed that.\r\n\r\nSo you needed to do `git pull` before continuing pushing. and if you already made some changes and `git pull` doesn't work because an update was made in files that you locally modify, you normally do:\r\n```\r\ngit stash\r\ngit pull\r\ngit stash pop\r\n```\r\nand deal with merge conflicts if any emerge.\r\n\r\nIn general force-pushing should only be reserved for when a bad mistake was made and you need to undo some damage.\r\n\r\nSo your force-pushing undid the changes I pushed. But since you then rebased it's the same as what I did. No damage done in this situation.\r\n\r\nBut please be careful in the future and first understand why you think of doing force pushing.", "Okay @stas00,\r\nI will be careful while using force push I will use `stash`.\r\nNow I understood", "Hello @stas00 and @sgugger,\r\nI have made the suggested changes\r\nPlease let me know if any other changes are required\r\nThanks", "@LysandreJik I think this is ready for final review and merge if you're happy with it." ]
1,614
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10437 #10423 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #10437 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Notes: All the PyTorch-based examples except the below two files will have support for the arguments by adding these changes. 1. The same changes can be implemented for `run_mlm_flax.py` but since I couldn't test the changes I didn't make changes to that file. 2. `run_generation.py` * I have reverted the code changes for three TF-based examples since it was giving an error and we want to keep it as it is. * Test/Predict code addition is still pending. I will do it next. ## review: @stas00 @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10551/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10551/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10551", "html_url": "https://github.com/huggingface/transformers/pull/10551", "diff_url": "https://github.com/huggingface/transformers/pull/10551.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10551.patch", "merged_at": 1615229830000 }
https://api.github.com/repos/huggingface/transformers/issues/10550
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10550/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10550/comments
https://api.github.com/repos/huggingface/transformers/issues/10550/events
https://github.com/huggingface/transformers/issues/10550
823,244,325
MDU6SXNzdWU4MjMyNDQzMjU=
10,550
How to get best model from hyperparameter search easily
{ "login": "sven-h", "id": 8777506, "node_id": "MDQ6VXNlcjg3Nzc1MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8777506?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sven-h", "html_url": "https://github.com/sven-h", "followers_url": "https://api.github.com/users/sven-h/followers", "following_url": "https://api.github.com/users/sven-h/following{/other_user}", "gists_url": "https://api.github.com/users/sven-h/gists{/gist_id}", "starred_url": "https://api.github.com/users/sven-h/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sven-h/subscriptions", "organizations_url": "https://api.github.com/users/sven-h/orgs", "repos_url": "https://api.github.com/users/sven-h/repos", "events_url": "https://api.github.com/users/sven-h/events{/privacy}", "received_events_url": "https://api.github.com/users/sven-h/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger ", "Yes there is nothing available for that right now. I believe the to `run_hp_search` functions should save the checkpoints of the non-aborted training and at least return the location of the best checkpoint in the BestRun namedtuple, as well as load the best model fine-tuned at the end if `load_best_model_at_end=True`. If you want to tackle this, we'd love to get a PR!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,614
1,619
1,619
NONE
null
Hi, after doing a hyperparameter search(by calling `hyperparameter_search`on the trainer object) , I asked myself how to easily get the best model out of it. Currently, I'm using Ray Tune as a backend. Given [the code in integrations.py](https://github.com/huggingface/transformers/blob/54e55b52d4886d4c63e592310b4253e01c606285/src/transformers/integrations.py#L238) the trial id, objective and chosen hyperparameters are stored in class [BestRun](https://github.com/huggingface/transformers/blob/54e55b52d4886d4c63e592310b4253e01c606285/src/transformers/trainer_utils.py#L116) which is then returned by the hyperparameter_search function. But here the model is somehow missing or am I wrong? One option would be to retrained from the given hyperparameters but this is not possible in PBT because the perturbation is applied during hyperparameter search (and cannot be repeated). The only thing I currently see is to load the model based on the run_id and compose the corresponding file path. But maybe there is an easier way to do it (or is this the expected way?). I also tried out the parameter `load_best_model_at_end=True` in [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) which is used in [this example](https://docs.ray.io/en/master/tune/examples/pbt_transformers.html) but it does not help. One proposal would be to add a parameter in BestRun which would contain the model or even better load it directly into the trainer (such that the `predict` and `save_model` functions also work). Is this a reasonable feature request? In case yes, I'd be happy to create a pull request. In other documentations they show how they extract the checkpoint directly: - [Ray Docs](https://docs.ray.io/en/master/tune/tutorials/tune-serve-integration-mnist.html#configuring-the-search-space-and-starting-ray-tune) ``` best_trial = analysis.get_best_trial("mean_accuracy", "max", "last") best_accuracy = best_trial.metric_analysis["mean_accuracy"]["last"] best_trial_config = best_trial.config best_checkpoint = best_trial.checkpoint.value ``` - [Colab example for PBT](https://colab.research.google.com/drive/1tQgAKgcKQzheoh503OzhS4N9NtfFgmjF?usp=sharing#scrollTo=TxKyvQ6WNlvG) ``` best_config = analysis.get_best_config(metric="eval_acc", mode="max") print(best_config) best_checkpoint = recover_checkpoint( analysis.get_best_trial(metric="eval_acc", mode="max").checkpoint.value) ``` Best regards Sven
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10550/timeline
completed
null
null
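A minimal sketch of the retrain-with-the-best-hyperparameters workaround discussed in this issue. It does not cover the PBT case, where perturbations cannot be replayed, and it assumes a `trainer` that was built with `model_init=...` (a requirement of `hyperparameter_search`); the search settings and output path are placeholders.

```python
# Assumes `trainer` was created with `model_init=...`, which hyperparameter_search requires.
best_run = trainer.hyperparameter_search(direction="maximize", backend="ray", n_trials=4)

# BestRun only carries run_id, objective and hyperparameters, so one simple
# (non-PBT) workaround is to re-train once with the winning hyperparameters
# and then save/evaluate that model as usual.
for name, value in best_run.hyperparameters.items():
    setattr(trainer.args, name, value)

trainer.train()
trainer.save_model("best_hp_model")
metrics = trainer.evaluate()
print(best_run.objective, metrics)
```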
https://api.github.com/repos/huggingface/transformers/issues/10549
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10549/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10549/comments
https://api.github.com/repos/huggingface/transformers/issues/10549/events
https://github.com/huggingface/transformers/pull/10549
823,243,343
MDExOlB1bGxSZXF1ZXN0NTg1NzM1MTkx
10,549
Fix embeddings for PyTorch 1.8
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,614
1,614
1,614
COLLABORATOR
null
# What does this PR do? This PR fixes several embedding layers to deal with the recent breaking change introduced in PyTorch 1.8. Up until PyTorch 1.7, the `padding_idx` passed to an embedding layer was used to initialize the corresponding row in the weights to 0 but was ignored afterwards. Now, this `padding_idx` is used at every forward pass and ignores the potential weights of the padding index (spoiler alert: all pretrained models I checked have a nonzero one). To solve this, this PR removes all `padding_idx` arguments passed to embedding layers. Since we were re-initializing them in the `_init_weights` function anyway, the zero weight for that index was already being ignored. This PR thus introduces no breaking change on our side while dealing with the breaking change in PyTorch.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10549/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10549", "html_url": "https://github.com/huggingface/transformers/pull/10549", "diff_url": "https://github.com/huggingface/transformers/pull/10549.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10549.patch", "merged_at": 1614979128000 }
https://api.github.com/repos/huggingface/transformers/issues/10548
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10548/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10548/comments
https://api.github.com/repos/huggingface/transformers/issues/10548/events
https://github.com/huggingface/transformers/issues/10548
823,218,002
MDU6SXNzdWU4MjMyMTgwMDI=
10,548
Dead link to optuna.create_study under hyperparamter_search in Trainer
{ "login": "joawar", "id": 46854160, "node_id": "MDQ6VXNlcjQ2ODU0MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/46854160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joawar", "html_url": "https://github.com/joawar", "followers_url": "https://api.github.com/users/joawar/followers", "following_url": "https://api.github.com/users/joawar/following{/other_user}", "gists_url": "https://api.github.com/users/joawar/gists{/gist_id}", "starred_url": "https://api.github.com/users/joawar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joawar/subscriptions", "organizations_url": "https://api.github.com/users/joawar/orgs", "repos_url": "https://api.github.com/users/joawar/repos", "events_url": "https://api.github.com/users/joawar/events{/privacy}", "received_events_url": "https://api.github.com/users/joawar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for flagging! Do you want to make a PR to fix it?", "> Thanks for flagging! Do you want to make a PR to fix it?\r\n\r\nI tried (#10554). Did I do it correctly?", "Looks okay to me! Let's just wait to check the tests pass. Thanks! :-)" ]
1,614
1,614
1,614
CONTRIBUTOR
null
I noticed the hyperlink to the documentation of optuna's create_study under ```kwargs``` in the ```hyperparameter_search``` method of Trainer is outdated. https://huggingface.co/transformers/main_classes/trainer.html ### Who can help Documentation: @sgugger New URL (I'm guessing): https://optuna.readthedocs.io/en/stable/reference/generated/optuna.study.create_study.html Old (currently used): https://optuna.readthedocs.io/en/stable/reference/alias_generated/optuna.create_study.html#optuna.create_study
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10548/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10548/timeline
completed
null
null
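For context, the `kwargs` documented by the (now-fixed) link are forwarded to `optuna.create_study` when the optuna backend is used. A small sketch, again assuming a `trainer` built with `model_init`; the sampler, pruner and trial count are illustrative choices, not defaults.

```python
import optuna

# Extra keyword arguments are passed along to optuna.create_study, e.g. a sampler or pruner.
best_run = trainer.hyperparameter_search(
    backend="optuna",
    direction="minimize",
    n_trials=10,
    sampler=optuna.samplers.TPESampler(seed=42),  # forwarded to optuna.create_study
    pruner=optuna.pruners.MedianPruner(),         # forwarded to optuna.create_study
)
print(best_run.hyperparameters)
```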
https://api.github.com/repos/huggingface/transformers/issues/10547
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10547/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10547/comments
https://api.github.com/repos/huggingface/transformers/issues/10547/events
https://github.com/huggingface/transformers/pull/10547
823,181,673
MDExOlB1bGxSZXF1ZXN0NTg1Njg0NTQw
10,547
[Wav2Vec2 Example Script] Typo
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,614
1,614
1,614
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fix a typo. Script should be as generic as possible ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10547/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10547", "html_url": "https://github.com/huggingface/transformers/pull/10547", "diff_url": "https://github.com/huggingface/transformers/pull/10547.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10547.patch", "merged_at": 1614957432000 }
https://api.github.com/repos/huggingface/transformers/issues/10546
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10546/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10546/comments
https://api.github.com/repos/huggingface/transformers/issues/10546/events
https://github.com/huggingface/transformers/pull/10546
823,155,620
MDExOlB1bGxSZXF1ZXN0NTg1NjYzNDg1
10,546
Fix torch 1.8.0 segmentation fault
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,614
1,614
1,614
MEMBER
null
The ONNX test fails on PyTorch 1.8.0 due to a segmentation fault. This is a draft PR to try different things out.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10546/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10546", "html_url": "https://github.com/huggingface/transformers/pull/10546", "diff_url": "https://github.com/huggingface/transformers/pull/10546.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10546.patch", "merged_at": 1614964219000 }
https://api.github.com/repos/huggingface/transformers/issues/10545
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10545/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10545/comments
https://api.github.com/repos/huggingface/transformers/issues/10545/events
https://github.com/huggingface/transformers/pull/10545
823,095,141
MDExOlB1bGxSZXF1ZXN0NTg1NjExNjgw
10,545
Fixing conversation test for torch 1.8
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,614
1,614
1,614
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10545/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10545", "html_url": "https://github.com/huggingface/transformers/pull/10545", "diff_url": "https://github.com/huggingface/transformers/pull/10545.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10545.patch", "merged_at": 1614954254000 }
https://api.github.com/repos/huggingface/transformers/issues/10544
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10544/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10544/comments
https://api.github.com/repos/huggingface/transformers/issues/10544/events
https://github.com/huggingface/transformers/pull/10544
823,055,013
MDExOlB1bGxSZXF1ZXN0NTg1NTc2NTI0
10,544
Handle padding in decoder_input_ids when using generate
{ "login": "LittlePea13", "id": 26126169, "node_id": "MDQ6VXNlcjI2MTI2MTY5", "avatar_url": "https://avatars.githubusercontent.com/u/26126169?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LittlePea13", "html_url": "https://github.com/LittlePea13", "followers_url": "https://api.github.com/users/LittlePea13/followers", "following_url": "https://api.github.com/users/LittlePea13/following{/other_user}", "gists_url": "https://api.github.com/users/LittlePea13/gists{/gist_id}", "starred_url": "https://api.github.com/users/LittlePea13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LittlePea13/subscriptions", "organizations_url": "https://api.github.com/users/LittlePea13/orgs", "repos_url": "https://api.github.com/users/LittlePea13/repos", "events_url": "https://api.github.com/users/LittlePea13/events{/privacy}", "received_events_url": "https://api.github.com/users/LittlePea13/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,614
1,614
1,614
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10478 for pytorch version of `generate()` in generate_utils.py As described in the issue, when `decoder_input_ids` have different lengths and require padding, generation continues after the padding tokens. This PR modifies that behavior so that tokens are generated before the padding for each element in the batch. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? I am adding those who I see more often in the git blame of the file: @patrickvonplaten, @patil-suraj, @yjernite <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> This is my first PR, so let me use it to thank everyone involved in this library for all the cool work :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10544/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10544", "html_url": "https://github.com/huggingface/transformers/pull/10544", "diff_url": "https://github.com/huggingface/transformers/pull/10544.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10544.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10543
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10543/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10543/comments
https://api.github.com/repos/huggingface/transformers/issues/10543/events
https://github.com/huggingface/transformers/issues/10543
823,022,900
MDU6SXNzdWU4MjMwMjI5MDA=
10,543
Similar issue to #1091 in Blenderbot
{ "login": "karthikgrama", "id": 22337077, "node_id": "MDQ6VXNlcjIyMzM3MDc3", "avatar_url": "https://avatars.githubusercontent.com/u/22337077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/karthikgrama", "html_url": "https://github.com/karthikgrama", "followers_url": "https://api.github.com/users/karthikgrama/followers", "following_url": "https://api.github.com/users/karthikgrama/following{/other_user}", "gists_url": "https://api.github.com/users/karthikgrama/gists{/gist_id}", "starred_url": "https://api.github.com/users/karthikgrama/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/karthikgrama/subscriptions", "organizations_url": "https://api.github.com/users/karthikgrama/orgs", "repos_url": "https://api.github.com/users/karthikgrama/repos", "events_url": "https://api.github.com/users/karthikgrama/events{/privacy}", "received_events_url": "https://api.github.com/users/karthikgrama/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Do you encounter any errors because of the mismatch in length?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,614
1,619
1,619
NONE
null
Tokenizer and model are not in sync. I am using "facebook/blenderbot-400M-distill". The tokenizer has 8009 base tokens whereas the model has 8008. Could you please help me with this? `from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration mname = "facebook/blenderbot-400M-distill" model = BlenderbotForConditionalGeneration.from_pretrained(mname) tokenizer = BlenderbotTokenizer.from_pretrained(mname, local_files_only=True) print(len(tokenizer)) print(model.config.to_dict()['vocab_size'])` Here is the output that I get. 8009 8008 ## Environment info - `transformers` version: 4.3.2 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.8.6 - PyTorch version (GPU?): 1.7.1+cpu (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10543/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10542
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10542/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10542/comments
https://api.github.com/repos/huggingface/transformers/issues/10542/events
https://github.com/huggingface/transformers/issues/10542
823,012,844
MDU6SXNzdWU4MjMwMTI4NDQ=
10,542
OSError: Can't load weights for 'facebook/mbart-large-cc25' when using TFMBartModel
{ "login": "SantiagoEG", "id": 12842728, "node_id": "MDQ6VXNlcjEyODQyNzI4", "avatar_url": "https://avatars.githubusercontent.com/u/12842728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SantiagoEG", "html_url": "https://github.com/SantiagoEG", "followers_url": "https://api.github.com/users/SantiagoEG/followers", "following_url": "https://api.github.com/users/SantiagoEG/following{/other_user}", "gists_url": "https://api.github.com/users/SantiagoEG/gists{/gist_id}", "starred_url": "https://api.github.com/users/SantiagoEG/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SantiagoEG/subscriptions", "organizations_url": "https://api.github.com/users/SantiagoEG/orgs", "repos_url": "https://api.github.com/users/SantiagoEG/repos", "events_url": "https://api.github.com/users/SantiagoEG/events{/privacy}", "received_events_url": "https://api.github.com/users/SantiagoEG/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Yes, you can load the PyTorch weights into a Transformer model by adding `from_pt=True` in the `from_pretrained` method.", "Thank you very much @LysandreJik!\r\nI have tried two ways:\r\n\r\n**Option 1 Using from_pt = True**\r\n_bart_model = TFMBartModel.from_pretrained(\"facebook/mbart-large-cc25\", from_pt=True)_\r\n\r\nThis worked well, but the following message appeared:\r\nSome weights of the PyTorch model were not used when initializing the TF 2.0 model TFMBartModel: ['final_logits_bias', 'model.encoder.embed_tokens.weight', 'model.decoder.embed_tokens.weight']\r\n- This IS expected if you are initializing TFMBartModel from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing TFMBartModel from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).\r\nAll the weights of TFMBartModel were initialized from the PyTorch model.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFMBartModel for predictions without further training.\r\n\r\nFurthermore, I had memory problems when trying this way in GPU.\r\n\r\n**Option 2 Using MBartConfig**\r\n_configuration = MBartConfig(name_or_path = \"facebook/mbart-large-cc25\")\r\nbart_model = TFMBartModel(configuration)_\r\n\r\nI have checked the outputs for the same input, and they were different. So I think that Option 1 did not properly loaded all the weights. So, I recommend using Option 2! Hope this helps!\r\n\r\n", "Hi! The option 2 you mention isn't loading any weights on the model itself. You're instantiating a configuration that is similar to `facebook/mbart-large-cc25`, and initializing a model with random weights following that configuration.", "If you run inference twice through the model loaded with option 1, do you get different inputs?", "Hi @LysandreJik,\r\n\r\nThe outputs were different between option 1 & 2, but it is obvious if they do not load the same weights. But other fact is that when I load the model with option 1 several times, the outputs are different. Meanwhile, if I load once the model and predict twice, the outputs are the same. Could be it due to dropout? \r\n\r\nTo avoid memory problems with option 1, I am going to load the model in CPU and export the TF weights to h5 file. 
Then load them with GPU settings.", "**[CPU] Saving pretrained model**\r\nI have tried loading the pretrained mBART model in CPU settings and save it in TF formar with the following code:\r\n\r\n_mbart_cpu = TFMBartModel.from_pretrained(\"facebook/mbart-large-cc25\", from_pt=True)\r\nmbart_cpu.save_pretrained('saved_models/')_\r\n\r\nNo errors appeared\r\n\r\n**[GPU] Loading pretrained weights**\r\nAfter exporting pretrained mBART, I tried loading it with GPU settings as follows:\r\n\r\n_mbart_in_gpu = TFMBartModel.from_pretrained(\"saved_models\")_\r\n\r\nHowever, the following error appeared:\r\n\r\n**Traceback (most recent call last):**\r\n\r\n File \"G:\\Mi unidad\\D4.2\\Proof of Concept simCLR for MT\\load_in_GPU.py\", line 23, in <module>\r\n mbart_model_2 = TFMBartModel.from_pretrained(\"saved_models\")\r\n\r\n File \"c:\\users\\vicen\\anaconda3\\envs\\signon_2\\lib\\site-packages\\transformers\\modeling_tf_utils.py\", line 1244, in from_pretrained\r\n missing_keys, unexpected_keys = load_tf_weights(model, resolved_archive_file)\r\n\r\n File \"c:\\users\\vicen\\anaconda3\\envs\\signon_2\\lib\\site-packages\\transformers\\modeling_tf_utils.py\", line 532, in load_tf_weights\r\n K.batch_set_value(weight_value_tuples)\r\n\r\n File \"c:\\users\\vicen\\anaconda3\\envs\\signon_2\\lib\\site-packages\\tensorflow\\python\\util\\dispatch.py\", line 201, in wrapper\r\n return target(*args, **kwargs)\r\n\r\n File \"c:\\users\\vicen\\anaconda3\\envs\\signon_2\\lib\\site-packages\\tensorflow\\python\\keras\\backend.py\", line 3706, in batch_set_value\r\n x.assign(np.asarray(value, dtype=dtype(x)))\r\n\r\n File \"c:\\users\\vicen\\anaconda3\\envs\\signon_2\\lib\\site-packages\\tensorflow\\python\\ops\\resource_variable_ops.py\", line 892, in assign\r\n assign_op = gen_resource_variable_ops.assign_variable_op(\r\n\r\n File \"c:\\users\\vicen\\anaconda3\\envs\\signon_2\\lib\\site-packages\\tensorflow\\python\\ops\\gen_resource_variable_ops.py\", line 144, in assign_variable_op\r\n _ops.raise_from_not_ok_status(e, name)\r\n\r\n File \"c:\\users\\vicen\\anaconda3\\envs\\signon_2\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 6862, in raise_from_not_ok_status\r\n six.raise_from(core._status_to_exception(e.code, message), None)\r\n\r\n File \"<string>\", line 3, in raise_from\r\n\r\n**InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run AssignVariableOp: Dst tensor is not initialized. 
[Op:AssignVariableOp]**\r\n\r\nThank you in advance for your help!\r\n", "Pinging @patrickvonplaten and @patil-suraj ", "Thank you very much @LysandreJik!", "Hey @SantiagoEG \r\n\r\nthe reason for the issue is that `TFMBartModel` is not TF counterpart of `MBartModel`, the counterpart is `TFMBartMainLayer`\r\n\r\nas you can see here\r\npt `MBartModel` : https://github.com/huggingface/transformers/blob/master/src/transformers/models/mbart/modeling_mbart.py#L1096\r\ntf `TFMBartModel`: https://github.com/huggingface/transformers/blob/master/src/transformers/models/mbart/modeling_tf_mbart.py#L1178\r\n\r\nthe structure is different, `TFMBartModel` does not contain the shared token embeddings layer, but instead, it wraps `TFMBartMainLayer `, which is why we canโ€™t do `TFMBartModel.from_pretrained(..., from_pt=True)`\r\ninstead, we need to load weights using `TFMBartForConditionalGeneration` and then we can load `TFMBartModel` using the saved `TFMBartForConditionalGeneration`\r\n\r\n```python\r\ntf_model = TFMBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-cc25\", from_pt=True)\r\ntf_model.save_pretrained(\"tf_model\")\r\n\r\ntf_mbart_model = TFMBartModel.from_pretrained(\"tf_model\")", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,614
1,619
1,619
NONE
null
## Environment info - `transformers` version: 4.3.3 - Platform: Windows 10 - Python version: 3.8 - PyTorch version (GPU?): - No - Tensorflow version (GPU?): 2.4.1, Yes - Using GPU in script?: Yes (I think it is indifferent for this issue) - Using distributed or parallel set-up in script?: No ## Issue Description Firstly, I would to thank you this extraordinary contribution to NLP. We are starting to apply transformers to our NLP problem and we want to test the pretrained mBART model. I have tried to load the TF version of this model following your documentation: https://huggingface.co/transformers/master/model_doc/mbart.html#tfmbartmodel Unfortunately, we are experiencing an error in _"model = TFMBartModel.from_pretrained('facebook/mbart-large-cc25')"_. **Error Traceback:** 404 Client Error: Not Found for url: https://huggingface.co/facebook/mbart-large-cc25/resolve/main/tf_model.h5 Traceback (most recent call last): File "c:\users\vicen\anaconda3\envs\signon_2\lib\site-packages\transformers\modeling_tf_utils.py", line 1203, in from_pretrained resolved_archive_file = cached_path( File "c:\users\vicen\anaconda3\envs\signon_2\lib\site-packages\transformers\file_utils.py", line 1078, in cached_path output_path = get_from_cache( File "c:\users\vicen\anaconda3\envs\signon_2\lib\site-packages\transformers\file_utils.py", line 1216, in get_from_cache r.raise_for_status() File "c:\users\vicen\anaconda3\envs\signon_2\lib\site-packages\requests\models.py", line 943, in raise_for_status raise HTTPError(http_error_msg, response=self) HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/facebook/mbart-large-cc25/resolve/main/tf_model.h5 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:\Mi unidad\D4.2\Proof of Concept simCLR for MT\test_tokenizer.py", line 46, in <module> bart_model = TFMBartModel.from_pretrained('facebook/mbart-large-cc25') File "c:\users\vicen\anaconda3\envs\signon_2\lib\site-packages\transformers\modeling_tf_utils.py", line 1219, in from_pretrained raise EnvironmentError(msg) OSError: Can't load weights for 'facebook/mbart-large-cc25'. Make sure that: - 'facebook/mbart-large-cc25' is a correct model identifier listed on 'https://huggingface.co/models' - or 'facebook/mbart-large-cc25' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. It seems that .h5 file with pretrained weights are not available in your repository. If it is not possible to update it, do you any way to transform pytorch .bin to TF .h5? Thank you in advance!!!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10542/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10541
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10541/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10541/comments
https://api.github.com/repos/huggingface/transformers/issues/10541/events
https://github.com/huggingface/transformers/issues/10541
822,953,463
MDU6SXNzdWU4MjI5NTM0NjM=
10,541
Facing Issue while running `run_tf_multiple_choice.py` from examples
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Even for the `run_tf_squad.py` script, I am facing the issue. \r\n\r\nHere is the [colab notebook](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/Check_run_tf_squad.ipynb) with issue and Traceback logs\r\n\r\nIs there anything else I need to use while running the script?", "Hello!\r\n\r\nThe multiple choice example needs to be reworked. A PR to fix the squad example is available https://github.com/huggingface/transformers/pull/10275. Be aware that some arguments are not implemented on the TF side.\r\n\r\nThe TF examples are under rework and should become more reliable in a near future.", "I am closing this issue since it is already WIP" ]
1,614
1,615
1,615
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.0.dev0 - Platform: Colab - Python version: NA - PyTorch version (GPU?): NA - Tensorflow version (GPU?): - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information I was trying to train `bert-base-cased` on Multiple Choice task with below script provided on Readme of the task ``` export SWAG_DIR=/path/to/swag_data_dir python ./examples/multiple-choice/run_tf_multiple_choice.py \ --task_name swag \ --model_name_or_path bert-base-cased \ --do_train \ --do_eval \ --data_dir $SWAG_DIR \ --learning_rate 5e-5 \ --num_train_epochs 3 \ --max_seq_length 80 \ --output_dir models_bert/swag_base \ --per_gpu_eval_batch_size=16 \ --per_device_train_batch_size=16 \ --gradient_accumulation_steps 2 \ --overwrite_output ``` I got below error ``` Invalid argument: ValueError: `generator` yielded an element of shape (4, 1, 80) where an element of shape (None, None) was expected. Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/script_ops.py", line 249, in __call__ ret = func(*args) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py", line 620, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 938, in generator_py_func (ret_array.shape, expected_shape)) ValueError: `generator` yielded an element of shape (4, 1, 80) where an element of shape (None, None) was expected. ``` The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) swag * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Follow this colab [Notebook ](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/CheckingTFScripts.ipynb) to run the script and reproduce the issue. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior It should execute the script and train the model without the given error. Note: The colab notebook given in the readme is not working, It's outdated maybe! **I removed `--logging-dir logs \` from the script because it was giving me another error** Tagging SMEs: @LysandreJik @jplu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10541/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10540
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10540/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10540/comments
https://api.github.com/repos/huggingface/transformers/issues/10540/events
https://github.com/huggingface/transformers/issues/10540
822,934,560
MDU6SXNzdWU4MjI5MzQ1NjA=
10,540
๐Ÿ› Bug in attention head mask for cross-attention module in encoder-decoder models
{ "login": "stancld", "id": 46073029, "node_id": "MDQ6VXNlcjQ2MDczMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stancld", "html_url": "https://github.com/stancld", "followers_url": "https://api.github.com/users/stancld/followers", "following_url": "https://api.github.com/users/stancld/following{/other_user}", "gists_url": "https://api.github.com/users/stancld/gists{/gist_id}", "starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stancld/subscriptions", "organizations_url": "https://api.github.com/users/stancld/orgs", "repos_url": "https://api.github.com/users/stancld/repos", "events_url": "https://api.github.com/users/stancld/events{/privacy}", "received_events_url": "https://api.github.com/users/stancld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Unstale", "Hey @stancld,\r\n\r\nSorry for being so unresponsive here - I'm happy to change the behavior and provide 3 masks" ]
1,614
1,619
1,619
CONTRIBUTOR
null
Currently, encoder-decoder models use either `head_mask` or `decoder_head_mask` for masking attention heads in cross-attention modules. Neither case is perfectly correct. Furthermore, MHA in cross-attention modules shares its parameters with the decoder, i.e. `shape = (decoder.num_layers, decoder.num_attention_heads)`; therefore, using the encoder `head_mask` in the cross-attention module may lead to errors due to the shape mismatch. <hr> **My contribution:** I will take care of this issue this weekend. <hr> **Reviewers:** @patil-suraj @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10540/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10539
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10539/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10539/comments
https://api.github.com/repos/huggingface/transformers/issues/10539/events
https://github.com/huggingface/transformers/issues/10539
822,912,158
MDU6SXNzdWU4MjI5MTIxNTg=
10,539
Wav2Vec2 custom training tokenizer bug
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I will add a notebook on how to fine-tune Wav2Vec2 on languages other than English next week (think I'll also go for the German Common Voice dataset). We only today added the multi-lingual checkpoint, so you probably used the English checkpoint which cannot handle German. If you didn't see the notebook within ~1,2 weeks, please ping me here again", "Thanks for your answer\r\nFor understanding, why does the english checkpoint does not support german ?\r\nI didn't see any reason for that, where is the point I was blind at ?", "@flozi00 did you use a default English tokenizer/processor? You need to load a custom tokenizer, e.g. by `tokenizer = Wav2Vec2CTCTokenizer(vocab_file='path/to/custom/vocab.json')`, where vocab_file is the path to vocabulary of German characters. After that you can create a custom Wav2Vec2Processor:\r\n`processor = Wav2Vec2Processor(feature_extractor=Wav2Vec2FeatureExtractor.from_pretrained(\"facebook/wav2vec2-base\"), tokenizer=tokenizer)`\r\n\r\nRegarding the model, I'm not completely sure whether you can load the pretrained base model directly or instead convert the fairseq pytorch checkpoint manually. If I understand correctly, the script for converting Wav2Vec2 checkpoint requires letter dictionary (e.g. https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt). So if the Huggingface Transformers Wav2Vec2 model stores this letter dictionary, you probably need to convert the model manually with your own dict.ltr.txt with German letters included, as well as set the vocabulary size during conversion." ]
1,614
1,616
1,615
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: master - Platform: win 10 - Python version: 3.8 - PyTorch version (GPU?): GPU - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) --> @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: (give details below) wave2vec Training example * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I was just running the wave2vec Training on pretrained base modell with the german common voice dataset. I modified the dataset that it fits into the format of librispeech so there are no changes in the example script. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> While Training the wer on eval dataset is exactly 1. After Training, while evaluation I recognized that the transcribed predictions are unk tokens only and the tokenizer is missing. I am using the pretrained tokenizer that's why. The words and sentences seems to be correct (counting the same on both sides), only instead of words only the unk tokens are returned after detokenizing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10539/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10538
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10538/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10538/comments
https://api.github.com/repos/huggingface/transformers/issues/10538/events
https://github.com/huggingface/transformers/issues/10538
822,904,612
MDU6SXNzdWU4MjI5MDQ2MTI=
10,538
Transformer-XL padding token
{ "login": "Kouuh", "id": 46215418, "node_id": "MDQ6VXNlcjQ2MjE1NDE4", "avatar_url": "https://avatars.githubusercontent.com/u/46215418?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kouuh", "html_url": "https://github.com/Kouuh", "followers_url": "https://api.github.com/users/Kouuh/followers", "following_url": "https://api.github.com/users/Kouuh/following{/other_user}", "gists_url": "https://api.github.com/users/Kouuh/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kouuh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kouuh/subscriptions", "organizations_url": "https://api.github.com/users/Kouuh/orgs", "repos_url": "https://api.github.com/users/Kouuh/repos", "events_url": "https://api.github.com/users/Kouuh/events{/privacy}", "received_events_url": "https://api.github.com/users/Kouuh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,614
1,614
1,614
NONE
null
When dealing with a batch consisting of sequences of different lengths, how do I choose the parameters so that the padding_token is not involved in the computation?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10538/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10537
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10537/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10537/comments
https://api.github.com/repos/huggingface/transformers/issues/10537/events
https://github.com/huggingface/transformers/pull/10537
822,904,187
MDExOlB1bGxSZXF1ZXN0NTg1NDUwNjcx
10,537
Fix example of custom Trainer to reflect signature of compute_loss
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure why the tests are failing since I only tweaked the docs - perhaps it's a problem with the CI on your end?" ]
1,614
1,614
1,614
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes out-of-date example of custom `Trainer` in docs. Since several people have asked about multi-label classification in the forum and in #10232 I thought it might be useful to use this as the example. I also took the liberty of tightening the grammar a bit in the preceding text ๐Ÿ˜ƒ ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? Forum link: https://discuss.huggingface.co/t/custom-loss-compute-loss-got-an-unexpected-keyword-argument-return-outputs/4148?u=lewtun ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10537/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10537", "html_url": "https://github.com/huggingface/transformers/pull/10537", "diff_url": "https://github.com/huggingface/transformers/pull/10537.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10537.patch", "merged_at": 1614948293000 }
https://api.github.com/repos/huggingface/transformers/issues/10536
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10536/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10536/comments
https://api.github.com/repos/huggingface/transformers/issues/10536/events
https://github.com/huggingface/transformers/pull/10536
822,876,096
MDExOlB1bGxSZXF1ZXN0NTg1NDI3NzM3
10,536
Enabling multilingual models for translation pipelines.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik \r\n\r\nI added a method here `deep_round` to try and make test equality a bit sane.\r\n\r\ntorchTensor(...) == torch.Tensor(..) does not work (understandably).\r\nAny sort of float comparison is also flaky.\r\n\r\n`deep_round` simply tries to make `assertEqual` work in a sane way for any sort of nested structure to make test comparisons simpler to read and write.\r\n\r\nHere the tests just need to make sure that the actual output of `_build_translation_input_ids` is actually correct, writing it in that way make its much more readable IMO (than having to extract each element and call `allclose` on them). It actually allowed me to see that mbart has a different encoding scheme.\r\n\r\nHow do you feel about such a function to improve small sanity checks ?\r\n\r\nAnother route would be to create something like `assertAlmostEqual` that behaves similarly but I think it's a bit less simple to reason about.\r\n\r\nFinally we could stick to not using any such helper functions.", "When this PR is ready, could you complete the description of the PR? It would help to understand what we're reviewing, and would be better for the release notes and posterity. Thanks!", "Good catch ! Updated and completed !\r\nI think this is ready to merge if you're ok with it.", "I'm asking @patrickvonplaten and @sgugger for review as they're more acquainted with mBART-like tokenizers and their review would be helpful." ]
1,614
1,618
1,618
CONTRIBUTOR
null
# What does this PR do? Enables mutlilingual translation for pipelines. Some models can target multiple languages/ language pairs. Before this PR, there was no simple way to exploit that within the Translation pipeline. ## Move away from `translation_XX_to_YY`. Because src_lang, tgt_lang pairs can be used for a single model, we need to move away from this task naming scheme. Currently it was done, as being able to use `src_lang` and `tgt_lang` both within `pipeline(.... src_lang=XX, tgt_lang=yy)` and at call time `translator = pipeline(...); translation("input string", src_lang=XX, tgt_lang=YY)` ## Rely on the model's tokenizer to build the inputs. We now have at least 3 (m2m. mbart50, T5) different models that prepare input ids in a different manner. In order to avoid switches within a pipeline that would depend on a model, instead `tokenizer` can optionally implement `_build_translation_input_ids`. That enables to put model specific logic within their model files and use all their custom methods over there. This is in line with https://github.com/huggingface/transformers/pull/10002. ## Misc - `ensure_tensor_on_device` now supports non tensor members - Added a `deep_round` test utility to enable testing nested structures that contain tensors, floats and so on. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10536/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10536/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10536", "html_url": "https://github.com/huggingface/transformers/pull/10536", "diff_url": "https://github.com/huggingface/transformers/pull/10536.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10536.patch", "merged_at": 1618565496000 }