transformers | 13,632 | closed | bug in transformers notebook (training from scratch)?

Hello there!
First of all, I cannot thank @Rocketknight1 enough for the amazing work he has been doing to create `tensorflow` versions of the notebooks. On my side, I have spent some time and money (colab pro) trying to tie the notebooks together to create a full classifier from scratch with the following steps:
1. train the tokenizer
2. train the language model
3. train the classification head.
Unfortunately, I run into two issues. You can use the fully working notebook pasted below.
First issue: by training my own tokenizer I actually get a `perplexity` (225) that is way worse than the example shown https://github.com/huggingface/notebooks/blob/new_tf_notebooks/examples/language_modeling-tf.ipynb when using
```
model_checkpoint = "bert-base-uncased"
datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
```
This is puzzling as the tokenizer should be fine-tuned to the data used in the original tf2 notebook!
Second, there seems to be some **python issue** when I try to fine-tune the language model I obtained above with a text classification head.
Granted, the `tokenizer` and the underlying `language model` have been trained on another dataset (the wikitext dataset from the previous two tf2 notebooks, that is). See https://github.com/huggingface/notebooks/blob/new_tf_notebooks/examples/text_classification-tf.ipynb . However, I should at least get some valid output! Here the model is complaining about some collate function.
Could you please have a look @sgugger @LysandreJik @Rocketknight1 when you can? I would be very happy to contribute this notebook to the Hugging Face community (although most of the credits go to @Rocketknight1). There is a great demand for building language models and NLP tasks from scratch.
Thanks!!!!
Code below
---
get the most recent versions
```
!pip install git+https://github.com/huggingface/datasets.git
!pip install transformers
```
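To keep track of which versions actually got installed (helpful when a rerun behaves differently), a small optional check:
```
import datasets
import transformers

# Print the resolved versions so later reruns can be compared against this exact setup
print("transformers:", transformers.__version__)
print("datasets:", datasets.__version__)
```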
train tokenizer from scratch
```
from datasets import load_dataset
dataset = load_dataset("wikitext", name="wikitext-2-raw-v1", split="train")
batch_size = 1000
def batch_iterator():
for i in range(0, len(dataset), batch_size):
yield dataset[i : i + batch_size]["text"]
all_texts = [dataset[i : i + batch_size]["text"] for i in range(0, len(dataset), batch_size)]
from tokenizers import decoders, models, normalizers, pre_tokenizers, processors, trainers, Tokenizer
tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
tokenizer.normalizer = normalizers.BertNormalizer(lowercase=True)
tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer()
special_tokens = ["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"]
trainer = trainers.WordPieceTrainer(vocab_size=25000, special_tokens=special_tokens)
tokenizer.train_from_iterator(batch_iterator(), trainer=trainer)
cls_token_id = tokenizer.token_to_id("[CLS]")
sep_token_id = tokenizer.token_to_id("[SEP]")
print(cls_token_id, sep_token_id)
tokenizer.post_processor = processors.TemplateProcessing(
single=f"[CLS]:0 $A:0 [SEP]:0",
pair=f"[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1",
special_tokens=[
("[CLS]", cls_token_id),
("[SEP]", sep_token_id),
],
)
tokenizer.decoder = decoders.WordPiece(prefix="##")
from transformers import BertTokenizerFast
mytokenizer = BertTokenizerFast(tokenizer_object=tokenizer)
```
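As a quick sanity check of the freshly trained tokenizer (this snippet is not part of the original notebook, and the directory name is just an example):
```
# Encode a sample sentence to confirm the wrapped tokenizer produces [CLS] ... [SEP]
enc = mytokenizer("the quick brown fox")
print(enc["input_ids"])
print(mytokenizer.convert_ids_to_tokens(enc["input_ids"]))

# Optionally persist it so later steps can reload it by path
mytokenizer.save_pretrained("my-wordpiece-tokenizer")
```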
masked language model from scratch using my own tokenizer `mytokenizer`
```
model_checkpoint = "bert-base-uncased"
datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
def tokenize_function(examples):
return mytokenizer(examples["text"], truncation=True)
tokenized_datasets = datasets.map(
tokenize_function, batched=True, num_proc=4, remove_columns=["text"]
)
block_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
batch_size=1000,
num_proc=4,
)
from transformers import TFAutoModelForMaskedLM
model = TFAutoModelForMaskedLM.from_pretrained(model_checkpoint)
from transformers import create_optimizer, AdamWeightDecay
import tensorflow as tf
optimizer = AdamWeightDecay(lr=2e-5, weight_decay_rate=0.01)
def dummy_loss(y_true, y_pred):
return tf.reduce_mean(y_pred)
model.compile(optimizer=optimizer, loss={"loss": dummy_loss})
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=mytokenizer, mlm_probability=0.15, return_tensors="tf"
)
train_set = lm_datasets["train"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
validation_set = lm_datasets["validation"].to_tf_dataset(
columns=["attention_mask", "input_ids", "labels"],
shuffle=False,
batch_size=16,
collate_fn=data_collator,
)
model.fit(train_set, validation_data=validation_set, epochs=1)
import math
eval_results = model.evaluate(validation_set)[0]
print(f"Perplexity: {math.exp(eval_results):.2f}")
```
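One optional addition at this point (a sketch, with an arbitrary directory name): saving the trained model and tokenizer to disk, so the classification step below can reload them by path instead of reusing in-memory objects.
```
# Persist the masked-LM weights and the tokenizer for the fine-tuning step
model.save_pretrained("my-mlm")
mytokenizer.save_pretrained("my-mlm")
```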
and fine-tune on a classification task
```
GLUE_TASKS = [
"cola",
"mnli",
"mnli-mm",
"mrpc",
"qnli",
"qqp",
"rte",
"sst2",
"stsb",
"wnli",
]
task = "sst2"
batch_size = 16
from datasets import load_dataset, load_metric
actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset("glue", actual_task)
metric = load_metric("glue", actual_task)
```
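A quick way to confirm the metric object behaves as expected (dummy values, purely for illustration):
```
# For sst2 this returns an accuracy dict, e.g. {'accuracy': 0.66...}
print(metric.compute(predictions=[0, 1, 1], references=[0, 1, 0]))
```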
and now try to classify text
```
import numpy as np  # used below in compute_metrics
from transformers import AutoTokenizer
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mnli-mm": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
sentence1_key, sentence2_key = task_to_keys[task]
if sentence2_key is None:
print(f"Sentence: {dataset['train'][0][sentence1_key]}")
else:
print(f"Sentence 1: {dataset['train'][0][sentence1_key]}")
print(f"Sentence 2: {dataset['train'][0][sentence2_key]}")
def preprocess_function(examples):
if sentence2_key is None:
return mytokenizer(examples[sentence1_key], truncation=True)
return mytokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)
pre_tokenizer_columns = set(dataset["train"].features)
encoded_dataset = dataset.map(preprocess_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
print("Columns added by tokenizer:", tokenizer_columns)
validation_key = (
"validation_mismatched"
if task == "mnli-mm"
else "validation_matched"
if task == "mnli"
else "validation"
)
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["label"],
shuffle=True,
batch_size=16,
collate_fn=mytokenizer.pad,
)
tf_validation_dataset = encoded_dataset[validation_key].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["label"],
shuffle=False,
batch_size=16,
collate_fn=mytokenizer.pad,
)
from transformers import TFAutoModelForSequenceClassification
import tensorflow as tf
num_labels = 3 if task.startswith("mnli") else 1 if task == "stsb" else 2
if task == "stsb":
loss = tf.keras.losses.MeanSquaredError()
num_labels = 1
elif task.startswith("mnli"):
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_labels = 3
else:
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_labels = 2
model = TFAutoModelForSequenceClassification.from_pretrained(
model, num_labels=num_labels
)
from transformers import create_optimizer
num_epochs = 5
batches_per_epoch = len(encoded_dataset["train"]) // batch_size
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps
)
model.compile(optimizer=optimizer, loss=loss)
metric_name = (
"pearson"
if task == "stsb"
else "matthews_correlation"
if task == "cola"
else "accuracy"
)
def compute_metrics(predictions, labels):
if task != "stsb":
predictions = np.argmax(predictions, axis=1)
else:
predictions = predictions[:, 0]
return metric.compute(predictions=predictions, references=labels)
model.fit(
tf_train_dataset,
validation_data=tf_validation_dataset,
epochs=5,
callbacks=tf.keras.callbacks.EarlyStopping(patience=2),
)
predictions = model.predict(tf_validation_dataset)["logits"]
compute_metrics(predictions, np.array(encoded_dataset[validation_key]["label"]))
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d01ad7112f932f9c.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-de5efda680a1f856.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f3c1e00b7f03ba8.arrow
Sentence: hide new secretions from the parental units
Columns added by tokenizer: ['attention_mask', 'input_ids', 'token_type_ids']
---------------------------------------------------------------------------
VisibleDeprecationWarning Traceback (most recent call last)
<ipython-input-42-6eba4122302c> in <module>()
44 shuffle=True,
45 batch_size=16,
---> 46 collate_fn=mytokenizer.pad,
47 )
48 tf_validation_dataset = encoded_dataset[validation_key].to_tf_dataset(
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in _arrow_array_to_numpy(self, pa_array)
165 # cast to list of arrays or we end up with a np.array with dtype object
166 array: List[np.ndarray] = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
--> 167 return np.array(array, copy=False, **self.np_array_kwargs)
168
169
VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```
What do you think? Happy to help if I can
Thanks!!

Created: 09-17-2021 19:47:38 | Closed: 09-17-2021 19:47:38<|||||>For the first issue: you are training a new model from scratch versus fine-tuning one that has been pretrained on far more data, so it's completely normal that the latter wins. As for the second one, I'm not sure you can directly use the `tokenizer.pad` method as a collation function.
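A sketch of what that could look like (untested here, and reusing the `mytokenizer`, `encoded_dataset` and `tokenizer_columns` names from the code above): wrap the tokenizer in a data collator instead of passing `tokenizer.pad` itself.
```
from transformers import DataCollatorWithPadding

# Pads each batch dynamically and returns TF tensors instead of ragged Python lists
collator = DataCollatorWithPadding(tokenizer=mytokenizer, return_tensors="tf")

tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
    columns=tokenizer_columns,
    label_cols=["label"],
    shuffle=True,
    batch_size=16,
    collate_fn=collator,
)
```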
Note that since you are copying the error messages, you should expand the intermediate frames so we can see where the error comes from.<|||||>thanks @sgugger could you please clarify what you mean by
> As for the second one, I'm not sure you can directly use the tokenizer.pad method as a collation function.
The call
```
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["label"],
shuffle=True,
batch_size=16,
collate_fn=mytokenizer.pad,
```
comes directly from the official tf2 notebook https://github.com/huggingface/notebooks/blob/new_tf_notebooks/examples/text_classification-tf.ipynb<|||||>expanded error here, thanks!
```
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d01ad7112f932f9c.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-de5efda680a1f856.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f3c1e00b7f03ba8.arrow
Sentence: hide new secretions from the parental units
{'input_ids': [[2, 11384, 1363, 3215, 1325, 1218, 1125, 10341, 1139, 3464, 3], [2, 4023, 1491, 15755, 16, 1520, 4610, 1128, 13221, 802, 3], [2, 1187, 13755, 1327, 2845, 1142, 18920, 802, 4245, 3168, 7806, 1542, 2569, 3796, 3], [2, 3419, 22353, 13782, 1145, 3802, 1125, 1913, 2493, 3], [2, 1161, 1125, 6802, 11823, 17, 1137, 17, 1125, 17, 1233, 3765, 802, 1305, 18029, 802, 1125, 21157, 1843, 14645, 1280, 1427, 3]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
Columns added by tokenizer: ['attention_mask', 'input_ids', 'token_type_ids']
ClassLabel(num_classes=2, names=['negative', 'positive'], names_file=None, id=None)
---------------------------------------------------------------------------
VisibleDeprecationWarning Traceback (most recent call last)
<ipython-input-56-ddb32272e3ba> in <module>()
47 shuffle=True,
48 batch_size=16,
---> 49 collate_fn=mytokenizer.pad,
50 )
51 tf_validation_dataset = encoded_dataset[validation_key].to_tf_dataset(
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in to_tf_dataset(self, columns, batch_size, shuffle, drop_remainder, collate_fn, collate_fn_args, label_cols, dummy_labels, prefetch)
349 return [tf.convert_to_tensor(arr) for arr in out_batch]
350
--> 351 test_batch = np_get_batch(np.arange(batch_size))
352
353 @tf.function(input_signature=[tf.TensorSpec(None, tf.int64)])
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in np_get_batch(indices)
323
324 def np_get_batch(indices):
--> 325 batch = dataset[indices]
326 out_batch = []
327 if collate_fn is not None:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1780 format_columns=self._format_columns,
1781 output_all_columns=self._output_all_columns,
-> 1782 format_kwargs=self._format_kwargs,
1783 )
1784
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1769 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1770 formatted_output = format_table(
-> 1771 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1772 )
1773 return formatted_output
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
420 else:
421 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns)
--> 422 formatted_output = formatter(pa_table_to_format, query_type=query_type)
423 if output_all_columns:
424 if isinstance(formatted_output, MutableMapping):
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
196 return self.format_column(pa_table)
197 elif query_type == "batch":
--> 198 return self.format_batch(pa_table)
199
200 def format_row(self, pa_table: pa.Table) -> RowFormat:
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in format_batch(self, pa_table)
241
242 def format_batch(self, pa_table: pa.Table) -> dict:
--> 243 return self.numpy_arrow_extractor(**self.np_array_kwargs).extract_batch(pa_table)
244
245
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in extract_batch(self, pa_table)
152
153 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 154 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
155
156 def _arrow_array_to_numpy(self, pa_array: pa.Array) -> np.ndarray:
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in <dictcomp>(.0)
152
153 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 154 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
155
156 def _arrow_array_to_numpy(self, pa_array: pa.Array) -> np.ndarray:
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py in _arrow_array_to_numpy(self, pa_array)
165 # cast to list of arrays or we end up with a np.array with dtype object
166 array: List[np.ndarray] = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
--> 167 return np.array(array, copy=False, **self.np_array_kwargs)
168
169
VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```<|||||>I'm sure @Rocketknight1 will know what's going on here :-)<|||||>waiting for @Rocketknight1 then! Thanks<|||||>@Rocketknight1 @sgugger interestingly running the same notebook today (with the new pip install that is) returns another error
Not sure what the issue is this time... Any ideas?
Thanks!
```
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
Sentence: hide new secretions from the parental units
{'input_ids': [[2, 11384, 1363, 3215, 1325, 1218, 1125, 10341, 1139, 3464, 3], [2, 4023, 1491, 15755, 16, 1520, 4610, 1128, 13221, 798, 3], [2, 1187, 13755, 1327, 2845, 1142, 18920, 798, 4245, 3168, 7806, 1542, 2569, 3796, 3], [2, 3419, 22351, 13782, 1145, 3802, 1125, 1913, 2493, 3], [2, 1161, 1125, 6802, 11823, 17, 1137, 17, 1125, 17, 1233, 3765, 798, 1305, 18030, 798, 1125, 21156, 1843, 14645, 1280, 1427, 3]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
100%
68/68 [00:04<00:00, 20.16ba/s]
100%
1/1 [00:00<00:00, 10.70ba/s]
100%
2/2 [00:00<00:00, 13.42ba/s]
Columns added by tokenizer: ['token_type_ids', 'input_ids', 'attention_mask']
ClassLabel(num_classes=2, names=['negative', 'positive'], names_file=None, id=None)
/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py:167: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array(array, copy=False, **self.np_array_kwargs)
404 Client Error: Not Found for url: https://huggingface.co/%3Ctransformers.models.bert.modeling_tf_bert.TFBertForMaskedLM%20object%20at%200x7f1f29039850%3E/resolve/main/config.json
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
553 use_auth_token=use_auth_token,
--> 554 user_agent=user_agent,
555 )
6 frames
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
1409 use_auth_token=use_auth_token,
-> 1410 local_files_only=local_files_only,
1411 )
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
1573 r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
-> 1574 r.raise_for_status()
1575 etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")
/usr/local/lib/python3.7/dist-packages/requests/models.py in raise_for_status(self)
940 if http_error_msg:
--> 941 raise HTTPError(http_error_msg, response=self)
942
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/%3Ctransformers.models.bert.modeling_tf_bert.TFBertForMaskedLM%20object%20at%200x7f1f29039850%3E/resolve/main/config.json
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-6-ddb32272e3ba> in <module>()
73
74 model = TFAutoModelForSequenceClassification.from_pretrained(
---> 75 model, num_labels=num_labels
76 )
77
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
395 if not isinstance(config, PretrainedConfig):
396 config, kwargs = AutoConfig.from_pretrained(
--> 397 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
398 )
399 if hasattr(config, "auto_map") and cls.__name__ in config.auto_map:
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
525 """
526 kwargs["_from_auto"] = True
--> 527 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
528 if "model_type" in config_dict:
529 config_class = CONFIG_MAPPING[config_dict["model_type"]]
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
568 msg += f"- or '{revision}' is a valid git identifier (branch name, a tag name, or a commit id) that exists for this model name as listed on its model page on 'https://huggingface.co/models'\n\n"
569
--> 570 raise EnvironmentError(msg)
571
572 except json.JSONDecodeError:
OSError: Can't load config for '<transformers.models.bert.modeling_tf_bert.TFBertForMaskedLM object at 0x7f1f29039850>'. Make sure that:
- '<transformers.models.bert.modeling_tf_bert.TFBertForMaskedLM object at 0x7f1f29039850>' is a correct model identifier listed on 'https://huggingface.co/models'
- or '<transformers.models.bert.modeling_tf_bert.TFBertForMaskedLM object at 0x7f1f29039850>' is the correct path to a directory containing a config.json file
```<|||||>Hi @randomgambit, sorry for the lengthy delay in replying again! I'm still making changes to some of the lower-level parts of the library, so these notebooks haven't been fully finalized yet.
The `VisibleDeprecationWarning` in your first post is something that will hopefully be fixed by upcoming changes to `datasets`, but for now you can just ignore it.
The error you're getting in your final post is, I think, caused by you overwriting the variable `model` in your code. The `from_pretrained()` method expects a string like `bert-base-cased`, but it seems like you've created an actual TF model with that variable name. If you pass an actual model object to `from_pretrained()` it'll get very confused - so make sure that whatever argument you're passing there is a string and not something else!<|||||>thanks @Rocketknight1, super useful as usual. So what you are saying is that I should have saved my tokenizer `mytokenizer` and my language model `model` using `save_pretrained()`, and then I need to load the model with a classification head using `TFAutoModelForSequenceClassification`, right?
```
model.save_pretrained('mymodel')
mytokenizer.save_pretrained('mytokenizer')
model = TFAutoModelForSequenceClassification.from_pretrained(
'mymodel', num_labels=num_labels
)
```
This seems to work. I will try to adapt the code so that both the tokenization and the language model are performed on the dataset actually used in the classification task (`dataset = load_dataset("glue", "sst2")`). Do you mind having a look when I'm done? This will be a super useful notebook for everyone.
Thanks!<|||||>@Rocketknight1 @sgugger I can confirm the new TF notebook works beautifully! Thanks! Just a follow up though: I tried to fine-tune a `longformer` model and everything works smoothly until the `model.fit` call, where I get a cryptic message
This is the model I use:
```
task = "sst2"
model_checkpoint = "allenai/longformer-large-4096"
batch_size = 16
```
and then you can run the default notebook https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb until you reach the end
```
model.fit(
tf_train_dataset,
validation_data=tf_validation_dataset,
epochs=3)
Epoch 1/3
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-4075d9d9fb81> in <module>()
3 tf_train_dataset,
4 validation_data=tf_validation_dataset,
----> 5 epochs=3)
9 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1182 _r=1):
1183 callbacks.on_train_batch_begin(step)
-> 1184 tmp_logs = self.train_function(iterator)
1185 if data_handler.should_sync:
1186 context.async_wait()
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
883
884 with OptionalXlaContext(self._jit_compile):
--> 885 result = self._call(*args, **kwds)
886
887 new_tracing_count = self.experimental_get_tracing_count()
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
922 # In this case we have not created variables on the first call. So we can
923 # run the first trace but we should fail if variables are created.
--> 924 results = self._stateful_fn(*args, **kwds)
925 if self._created_variables and not ALLOW_DYNAMIC_VARIABLE_CREATION:
926 raise ValueError("Creating variables on a non-first call to a function"
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
3036 with self._lock:
3037 (graph_function,
-> 3038 filtered_flat_args) = self._maybe_define_function(args, kwargs)
3039 return graph_function._call_flat(
3040 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3458 call_context_key in self._function_cache.missed):
3459 return self._define_function_with_shape_relaxation(
-> 3460 args, kwargs, flat_args, filtered_flat_args, cache_key_context)
3461
3462 self._function_cache.missed.add(call_context_key)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _define_function_with_shape_relaxation(self, args, kwargs, flat_args, filtered_flat_args, cache_key_context)
3380
3381 graph_function = self._create_graph_function(
-> 3382 args, kwargs, override_flat_arg_shapes=relaxed_arg_shapes)
3383 self._function_cache.arg_relaxed[rank_only_cache_key] = graph_function
3384
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3306 arg_names=arg_names,
3307 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3308 capture_by_value=self._capture_by_value),
3309 self._function_attributes,
3310 function_spec=self.function_spec,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
1005 _, original_func = tf_decorator.unwrap(python_func)
1006
-> 1007 func_outputs = python_func(*func_args, **func_kwargs)
1008
1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
666 # the function a weak reference to itself to avoid a reference cycle.
667 with OptionalXlaContext(compile_with_xla):
--> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
669 return out
670
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
992 except Exception as e: # pylint:disable=broad-except
993 if hasattr(e, "ag_error_metadata"):
--> 994 raise e.ag_error_metadata.to_exception(e)
995 else:
996 raise
TypeError: in user code:
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:853 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/transformers/models/longformer/modeling_tf_longformer.py:2408 call *
inputs["global_attention_mask"] = tf.tensor_scatter_nd_update(
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper **
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/array_ops.py:5755 tensor_scatter_nd_update
tensor=tensor, indices=indices, updates=updates, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_array_ops.py:11311 tensor_scatter_update
updates=updates, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:558 _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'updates' of 'TensorScatterUpdate' Op has type int32 that does not match type int64 of argument 'tensor'.
```
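A small debugging sketch (assuming the `tf_train_dataset` built by the notebook): printing the dataset's element spec shows the dtypes actually being fed to `model.fit`, which is where the int32/int64 clash in the trace above shows up.
```
# to_tf_dataset() emits tf.int64 integer tensors; the TF Longformer code was
# mixing in int32 constants, hence the TensorScatterUpdate dtype error
print(tf_train_dataset.element_spec)
```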
Maybe there is something specific to `longformer` that does not work well with the current notebook? What do you all think?
Thanks!<|||||>@Rocketknight1 I know you are busy (and I cannot thank you enough for the magnificent TF notebooks!) but I wanted to let you know that I also have tried with `allenai/longformer-base-4096` and I am getting the same `int64` error. Please let me know if I can do anything to help you out.
Thanks!<|||||>Hi @Rocketknight1 I hope all is well!
I now wonder whether `longformer` can be trained at all with this notebook. Indeed, I read that
`This notebook is built to run on any of the tasks in the list above, with any model checkpoint from the Model Hub as long as that model has a version with a classification head.`
If so, could you please tell me which TF notebook I need to adapt to make it work?
Thanks!!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Have you found any solution @randomgambit? Running into this myself.<|||||>i'll try passing in zeros cast to `int32` to the `global_attention_mask` param to `fit` and see if that helps. the `tf.zeros_like` used by `transformers` to generate the mask (when none are passed in by the user) must default to `int64`?<|||||>@randomgambit try the opposite of what I said above. You need to cast your `input_ids` to `tf.int32`. something like this should work:
```
input_ids = tf.convert_to_tensor([tf.convert_to_tensor(row, dtype=tf.int32)
for row in input_ids], dtype=tf.int32)
```
it would probably work via equivalent `numpy` methods, but I haven't tried that yet. the default dtype for `tf.zeros_like` is `tf.int32` (transformers makes `global_attention_mask` using `tf.zeros_like` for you if you don't pass it in).
you could probably also create the `global_attention_mask` yourself as dtype `tf.int64`. point being i think they all just need to be the same type.
we can probably close this @Rocketknight1 <|||||>thanks @jmwoloso, I initially didn't see your message. I am hoping @Rocketknight1 can just confirm all is good before closing... Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Ran into the same problem. I am totally lost.
Here is what I did
```
import numpy as np
my_dict = {'text': ["random text 1", "random text 2", "random text 3"],
'label': [0, 0, 1]}
from datasets import Dataset
dataset = Dataset.from_dict(my_dict)
```
```
from transformers import LongformerTokenizer, TFLongformerForSequenceClassification
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
def tokenize_function(examples):
r=tokenizer(examples["text"], padding="max_length", truncation=True)
r['input_ids']= [tf.convert_to_tensor(row, dtype=tf.int32)
for row in r['input_ids']]
r['attention_mask']= [tf.convert_to_tensor(row, dtype=tf.int32)
for row in r['attention_mask']]
return r
tokenized_datasets = dataset.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets.shuffle(seed=42)
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator(return_tensors="tf")
tf_train_dataset = small_train_dataset.to_tf_dataset(
columns=["attention_mask", "input_ids", "token_type_ids"],
label_cols=["labels"],
shuffle=True,
collate_fn=data_collator,
batch_size=8,
)
model.fit(tf_train_dataset, batch_size=1)
```
@randomgambit and @jmwoloso any ideas?<|||||>@ichenjia There were a few errors mentioned throughout this thread. Which one are you seeing?<|||||>Thank you. It’s the last error related to int32 and int64
<|||||>@ichenjia Did you try my solution of casting your `input_ids` to `tf.int32`?<|||||>> @ichenjia Did you try my solution of casting your `input_ids` to `tf.int32`?
Thank you. Here is what I did per the earlier tip from this thread
```
r['input_ids']= [tf.convert_to_tensor(row, dtype=tf.int32)
for row in r['input_ids']]
r['attention_mask']= [tf.convert_to_tensor(row, dtype=tf.int32)
for row in r['attention_mask']]
```
In the tokenizer function mapped to dataset
I still got that int32 error. Did I do something wrong?<|||||>@jmwoloso
After reading the source code of Dataset, I think the problem is in the to_tf_dataset function, which called
`_get_output_signature` LN 290-303
```
if np.issubdtype(np_arrays[0].dtype, np.integer) or np_arrays[0].dtype == bool:
tf_dtype = tf.int64
np_dtype = np.int64
elif np.issubdtype(np_arrays[0].dtype, np.number):
tf_dtype = tf.float32
np_dtype = np.float32
elif np_arrays[0].dtype.kind == "U": # Unicode strings
np_dtype = np.unicode_
tf_dtype = tf.string
else:
raise RuntimeError(
f"Unrecognized array dtype {np_arrays[0].dtype}. \n"
"Nested types and image/audio types are not supported yet."
)
```
It forces a tf.int64 instead of tf.int32. It doesn't look like we have any control over it outside the API<|||||>There are always more layers, it seems @ichenjia :) I think we definitely have some control, or at least a way to hack it to prove the theory (thanks Python!). Could you try something like below as a temporary work around to see if it solves it?
I haven't looked at the source extensively, but maybe as a permanent fix we could add some dtype checking in `_get_output_signature` of the dataset in order to preserve what is passed in, but I'd defer to the HF crew on what, if anything, could/should be done assuming this hack works.
But until then, maybe this will help. We can try overriding that private method. (Also, to get the markdown formatting to show as a script, enclose your code with 3 backticks instead of 1).
*Edit was to fix formatting
```python
# extra imports so the copied helper below runs stand-alone
import types
from random import sample
from typing import Callable, List, Optional

import numpy as np
import tensorflow as tf
from datasets import config
def _get_output_signature(
dataset: "Dataset",
collate_fn: Callable,
collate_fn_args: dict,
cols_to_retain: Optional[List[str]] = None,
batch_size: Optional[int] = None,
num_test_batches: int = 10,
):
"""Private method used by `to_tf_dataset()` to find the shapes and dtypes of samples from this dataset
after being passed through the collate_fn. Tensorflow needs an exact signature for tf.numpy_function, so
the only way to do this is to run test batches - the collator may add or rename columns, so we can't figure
it out just by inspecting the dataset.
Args:
dataset (:obj:`Dataset`): Dataset to load samples from.
collate_fn(:obj:`bool`): Shuffle the dataset order when loading. Recommended True for training, False for
validation/evaluation.
collate_fn(:obj:`Callable`): A function or callable object (such as a `DataCollator`) that will collate
lists of samples into a batch.
collate_fn_args (:obj:`Dict`): A `dict` of keyword arguments to be passed to the
`collate_fn`.
batch_size (:obj:`int`, optional): The size of batches loaded from the dataset. Used for shape inference.
Can be None, which indicates that batch sizes can be variable.
Returns:
:obj:`dict`: Dict mapping column names to tf.Tensorspec objects
:obj:`dict`: Dict mapping column names to np.dtype objects
"""
if config.TF_AVAILABLE:
import tensorflow as tf
else:
raise ImportError("Called a Tensorflow-specific function but Tensorflow is not installed.")
if len(dataset) == 0:
raise ValueError("Unable to get the output signature because the dataset is empty.")
if batch_size is None:
test_batch_size = min(len(dataset), 8)
else:
batch_size = min(len(dataset), batch_size)
test_batch_size = batch_size
test_batches = []
for _ in range(num_test_batches):
indices = sample(range(len(dataset)), test_batch_size)
test_batch = dataset[indices]
if cols_to_retain is not None:
test_batch = {
key: value
for key, value in test_batch.items()
if key in cols_to_retain or key in ("label_ids", "label")
}
test_batch = [{key: value[i] for key, value in test_batch.items()} for i in range(test_batch_size)]
test_batch = collate_fn(test_batch, **collate_fn_args)
test_batches.append(test_batch)
tf_columns_to_signatures = {}
np_columns_to_dtypes = {}
for column in test_batches[0].keys():
raw_arrays = [batch[column] for batch in test_batches]
# In case the collate_fn returns something strange
np_arrays = []
for array in raw_arrays:
if isinstance(array, np.ndarray):
np_arrays.append(array)
elif isinstance(array, tf.Tensor):
np_arrays.append(array.numpy())
else:
np_arrays.append(np.array(array))
if np.issubdtype(np_arrays[0].dtype, np.integer) or np_arrays[0].dtype == bool:
tf_dtype = tf.int32 # formerly tf.int64
np_dtype = np.int32 # formerly tf.int64
elif np.issubdtype(np_arrays[0].dtype, np.number):
tf_dtype = tf.float32
np_dtype = np.float32
elif np_arrays[0].dtype.kind == "U": # Unicode strings
np_dtype = np.unicode_
tf_dtype = tf.string
else:
raise RuntimeError(
f"Unrecognized array dtype {np_arrays[0].dtype}. \n"
"Nested types and image/audio types are not supported yet."
)
shapes = [array.shape for array in np_arrays]
static_shape = []
for dim in range(len(shapes[0])):
sizes = set([shape[dim] for shape in shapes])
if dim == 0:
static_shape.append(batch_size)
continue
if len(sizes) == 1: # This dimension looks constant
static_shape.append(sizes.pop())
else: # Use None for variable dimensions
static_shape.append(None)
tf_columns_to_signatures[column] = tf.TensorSpec(shape=static_shape, dtype=tf_dtype)
np_columns_to_dtypes[column] = np_dtype
return tf_columns_to_signatures, np_columns_to_dtypes
my_dict = {'text': ["random text 1", "random text 2", "random text 3"],
'label': [0, 0, 1]}
from datasets import Dataset
dataset = Dataset.from_dict(my_dict)
from transformers import LongformerTokenizer, TFLongformerForSequenceClassification
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
def tokenize_function(examples):
r=tokenizer(examples["text"], padding="max_length", truncation=True)
r['input_ids']= [tf.convert_to_tensor(row, dtype=tf.int32)
for row in r['input_ids']]
r['attention_mask']= [tf.convert_to_tensor(row, dtype=tf.int32)
for row in r['attention_mask']]
return r
tokenized_datasets = dataset.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets.shuffle(seed=42)
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator(return_tensors="tf")
# override our instance method
tf_train_dataset._get_output_signature = types.MethodType(_get_output_signature, tf_train_dataset)
tf_train_dataset = small_train_dataset.to_tf_dataset(
columns=["attention_mask", "input_ids", "token_type_ids"],
label_cols=["labels"],
shuffle=True,
collate_fn=data_collator,
batch_size=8,
)
model.fit(tf_train_dataset, batch_size=1)
```<|||||>Hi @jmwoloso @ichenjia, sorry for only seeing this now! Just to clarify, are you encountering difficulties passing `tf.int64` values to `TFLongFormer`? You're correct that the `to_tf_dataset` and `prepare_tf_dataset` methods cast all int outputs to `tf.int64`, but this is because our policy is that our models should always accept `tf.int64` for any integer tensor inputs. If you're encountering issues with that, it's more likely a bug in LongFormer than in `to_tf_dataset`! <|||||>Hi @Rocketknight1 thanks for the reply. That all makes sense. This thread has kind of morphed, but I believe you solved the original issue which dealt with trying to pass ragged tensors to the model.
The next issue that came up from that was that the `TensorScatterUpdate` op in TF expects `tf.int32` inputs (according to the traceback) but was getting `tf.int64`. That originates in the `modeling_tf_longformer.py` module when the `global_attention_mask` is created.
I can take a look and see if there is anything to be done in that longformer file, but this seems like a lower-level TF op issue to me. But you are the TF scape-GOAT around here, so I'll defer to your guidance/wisdom :)<|||||>Hi @jmwoloso, the code for TFLongformer was indeed using lots of `tf.int32`, which it shouldn't. Our tests weren't picking that up for some reason - I'll have to investigate that later. For now, can you try the PR and let me know if it fixes your issues? You can install from the PR branch with `pip install --upgrade git+https://github.com/huggingface/transformers.git@fix_tflongformer_int_dtype`<|||||>Thanks @Rocketknight1! @ichenjia see if that solves your issue.
> Hi @jmwoloso, the code for TFLongformer was indeed using lots of `tf.int32`, which it shouldn't. Our tests weren't picking that up for some reason - I'll have to investigate that later. For now, can you try the PR and let me know if it fixes your issues? You can install from the PR branch with `pip install --upgrade git+https://github.com/huggingface/transformers.git@fix_tflongformer_int_dtype`
<|||||>Thank you @Rocketknight1 and @jmwoloso for the clear explanation, and your check-in does solve the int32 issue. However, I think the check-in may have brought in another issue.
My understanding is that the global_attention_mask is calculated at run-time instead of being provided, which is also marked as Optional in the API.
So when I call
`model.fit(tf_train_dataset, batch_size=1)`
The following line was called:
`longformer/modeling_tf_longformer.py:2391 call *
global_attention_mask = tf.cast(global_attention_mask, tf.int64)`
and the following error occurred
`python3.8/site-packages/tensorflow/python/framework/tensor_util.py:445 make_tensor_proto
raise ValueError("None values not supported.")
ValueError: None values not supported.`
I am guessing global_attention_mask was forcefully cast even though None was provided.
Is that correct understanding?
<|||||>@ichenjia can you try explicitly passing in the `global_attention_mask`? I believe it ends up just being constructed on the fly with `tf.zeroes_like` method so maybe you could try that to get you unstuck?<|||||>> @ichenjia can you try explicitly passing in the `global_attention_mask`? I believe it ends up just being constructed on the fly with `tf.zeroes_like` method so maybe you could try that to get you unstuck?
Thank you @jmwoloso
I manually created a global attention mask in the tokenizer function:
```
from transformers import LongformerTokenizer, TFLongformerForSequenceClassification
import tensorflow as tf
import pickle
import numpy as np
from transformers import DefaultDataCollator
tf.data.experimental.enable_debug_mode()
#tf.config.experimental_run_functions_eagerly(True)
tf.config.run_functions_eagerly(True)
import numpy as np
my_dict = {'text': ["random text 1", "randome text 2", "beautiful randome text 3"],
'label': [0,0,1]}
from datasets import Dataset
dataset = Dataset.from_dict(my_dict)
def tokenize_function(examples):
r=tokenizer(examples["text"], padding="max_length", truncation=True)
global_attention_masks=[[1]*len(r['attention_mask'][0])]*len(r['attention_mask'])
r['global_attention_mask']=global_attention_masks
return r
tokenized_datasets = dataset.map(tokenize_function, batched=True)
data_collator = DefaultDataCollator(return_tensors="tf")
tf_train_dataset = tokenized_datasets.to_tf_dataset(
columns=["attention_mask", "input_ids", "token_type_ids", 'global_attention_mask'],
label_cols=["labels"],
shuffle=True,
collate_fn=data_collator,
batch_size=1
)
tf.data.experimental.enable_debug_mode()
tf.config.experimental_run_functions_eagerly(True)
model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', num_labels=2)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=tf.metrics.SparseCategoricalAccuracy(),
)
model.fit(tf_train_dataset, batch_size=1)
```
It immediately produced an OOM error
`ResourceExhaustedError: OOM when allocating tensor with shape[12,16,196864] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:StridedSlice] name: tf_longformer_for_sequence_classification/longformer/encoder/layer_._5/attention/self/strided_slice/`
I have a Titan RTX with 24GB of VRAM on that GPU. How much RAM does this need? Am I doing something wrong again?
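A hedged aside on the OOM itself: the mask built in the snippet above enables global attention for every one of the 4096 padded positions, which makes Longformer's attention effectively dense; global attention is normally reserved for a few positions such as the CLS token. A sketch of that variant, reusing the names from the snippet:
```
# Global attention only on the first ([CLS]) position, local attention elsewhere
global_attention_masks = [[1] + [0] * (len(mask) - 1) for mask in r['attention_mask']]
r['global_attention_mask'] = global_attention_masks
```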
<|||||>ahhh...`Longformer` is pretty chunky, that's for sure. Have you tried `BigBird` (`google/bigbird-roberta-base`) by chance @ichenjia?<|||||>That doesn't solve this particular issue, but while we look into fixing it, I'm assuming your need is to handle longer sequence lengths than the typical Bert-like models are pre-trained on.<|||||>> ahhh...`Longformer` is pretty chunky, that's for sure. Have you tried `BigBird` (`google/bigbird-roberta-base`) by chance @ichenjia?
Thanks! I have not tried it because it only supports Torch not TF right? <|||||>You are talking about https://huggingface.co/docs/transformers/v4.21.3/en/model_doc/big_bird#transformers.BigBirdForSequenceClassification
right?<|||||>yeah, you're right...I assumed the TF-flavor of BigBird would have been the easiest lift to implement, but maybe not. can you revert back @Rocketknight1's PR and run it again, but post the entire output/traceback so I can take a look @ichenjia?
EDIT: I mean use his PR again and try running your script again without explicitly making and passing in the `global_attention_mask` and post the output/traceback here and I can probably get you a fix.<|||||>Thank you for trying to get to the bottom of it. Here is the code I ran:
```
from transformers import LongformerTokenizer, TFLongformerForSequenceClassification
import tensorflow as tf
import pickle
import numpy as np
from transformers import DefaultDataCollator
tf.data.experimental.enable_debug_mode()
tf.config.run_functions_eagerly(True)
import numpy as np
my_dict = {'text': ["random text 1", "randome text 2", "beautiful randome text 3"],
'label': [0,0,1]}
from datasets import Dataset
dataset = Dataset.from_dict(my_dict)
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
def tokenize_function(examples):
r=tokenizer(examples["text"], padding="max_length", truncation=True)
#global_attention_masks=[[1]*len(r['attention_mask'][0])]*len(r['attention_mask'])
#r['global_attention_mask']=global_attention_masks
return r
tokenized_datasets = dataset.map(tokenize_function, batched=True)
data_collator = DefaultDataCollator(return_tensors="tf")
tf_train_dataset = tokenized_datasets.to_tf_dataset(
columns=["attention_mask", "input_ids", "token_type_ids", 'global_attention_mask'],
label_cols=["labels"],
shuffle=True,
collate_fn=data_collator,
batch_size=1
)
model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', num_labels=2)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=tf.metrics.SparseCategoricalAccuracy(),
)
model.fit(tf_train_dataset, batch_size=1)
```
and here is the traceback:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-fccdbd4c6c6d> in <module>
5 metrics=tf.metrics.SparseCategoricalAccuracy(),
6 )
----> 7 model.fit(tf_train_dataset, batch_size=1)
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1182 _r=1):
1183 callbacks.on_train_batch_begin(step)
-> 1184 tmp_logs = self.train_function(iterator)
1185 if data_handler.should_sync:
1186 context.async_wait()
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/keras/engine/training.py in train_function(iterator)
851 def train_function(iterator):
852 """Runs a training execution with one step."""
--> 853 return step_function(self, iterator)
854
855 else:
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/keras/engine/training.py in step_function(model, iterator)
840
841 data = next(iterator)
--> 842 outputs = model.distribute_strategy.run(run_step, args=(data,))
843 outputs = reduce_per_replica(
844 outputs, self.distribute_strategy, reduction='first')
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py in run(***failed resolving arguments***)
1284 fn = autograph.tf_convert(
1285 fn, autograph_ctx.control_status_ctx(), convert_by_default=False)
-> 1286 return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
1287
1288 def reduce(self, reduce_op, value, axis):
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py in call_for_each_replica(self, fn, args, kwargs)
2847 kwargs = {}
2848 with self._container_strategy().scope():
-> 2849 return self._call_for_each_replica(fn, args, kwargs)
2850
2851 def _call_for_each_replica(self, fn, args, kwargs):
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs)
3630 def _call_for_each_replica(self, fn, args, kwargs):
3631 with ReplicaContext(self._container_strategy(), replica_id_in_sync_group=0):
-> 3632 return fn(*args, **kwargs)
3633
3634 def _reduce_to(self, reduce_op, value, destinations, options):
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
595 def wrapper(*args, **kwargs):
596 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.UNSPECIFIED):
--> 597 return func(*args, **kwargs)
598
599 if inspect.isfunction(func) or inspect.ismethod(func):
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/keras/engine/training.py in run_step(data)
833
834 def run_step(data):
--> 835 outputs = model.train_step(data)
836 # Ensure counter is updated only if `train_step` succeeds.
837 with tf.control_dependencies(_minimum_control_deps(outputs)):
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in train_step(self, data)
1390 # Run forward pass.
1391 with tf.GradientTape() as tape:
-> 1392 y_pred = self(x, training=True)
1393 if self._using_dummy_loss:
1394 loss = self.compiled_loss(y_pred.loss, y_pred.loss, sample_weight, regularization_losses=self.losses)
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1035 with autocast_variable.enable_auto_cast_variables(
1036 self._compute_dtype_object):
-> 1037 outputs = call_fn(inputs, *args, **kwargs)
1038
1039 if self._activity_regularizer:
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
405
406 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 407 return func(self, **unpacked_inputs)
408
409 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/transformers/models/longformer/modeling_tf_longformer.py in call(self, input_ids, attention_mask, head_mask, token_type_ids, position_ids, global_attention_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, labels, training)
2389 global_attention_mask = tf.convert_to_tensor(global_attention_mask, dtype=tf.int64)
2390 else:
-> 2391 global_attention_mask = tf.cast(global_attention_mask, tf.int64)
2392
2393 if global_attention_mask is None and input_ids is not None:
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
204 """Call target, and fall back on dispatchers if there is a TypeError."""
205 try:
--> 206 return target(*args, **kwargs)
207 except (TypeError, ValueError):
208 # Note: convert_to_eager_tensor currently raises a ValueError, not a
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in cast(x, dtype, name)
986 # allows some conversions that cast() can't do, e.g. casting numbers to
987 # strings.
--> 988 x = ops.convert_to_tensor(x, name="x")
989 if x.dtype.base_dtype != base_type:
990 x = gen_math_ops.cast(x, base_type, name=name)
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/profiler/trace.py in wrapped(*args, **kwargs)
161 with Trace(trace_name, **trace_kwargs):
162 return func(*args, **kwargs)
--> 163 return func(*args, **kwargs)
164
165 return wrapped
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1564
1565 if ret is None:
-> 1566 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1567
1568 if ret is NotImplemented:
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
344 as_ref=False):
345 _ = as_ref
--> 346 return constant(v, dtype=dtype, name=name)
347
348
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name)
269 ValueError: if called on a symbolic tensor.
270 """
--> 271 return _constant_impl(value, dtype, shape, name, verify_shape=False,
272 allow_broadcast=True)
273
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
281 with trace.Trace("tf.constant"):
282 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
--> 283 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
284
285 g = ops.get_default_graph()
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
306 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
307 """Creates a constant on the current device."""
--> 308 t = convert_to_eager_tensor(value, ctx, dtype)
309 if shape is None:
310 return t
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
104 dtype = dtypes.as_dtype(dtype).as_datatype_enum
105 ctx.ensure_initialized()
--> 106 return ops.EagerTensor(value, ctx.device_name, dtype)
107
108
ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.
```<|||||>i don't think that's gonna fix the OOM error right? <|||||>Yeah that won't fix that OOM error, but I wanted to see the full stack to help track down what we can do to adjust the base PR to get you unblocked. I'm not at my comp right now but will take a look tomorrow and see how we can adjust to make it work.<|||||>Hi all, I made a bunch of edits and hopefully things should work more smoothly now! Let me know if the problems remain.<|||||>Thanks @Rocketknight1, much appreciated!<|||||>Can you try it again @ichenjia?<|||||>Sorry, I was busy yesterday. here is what I did:
pip install --upgrade git+https://github.com/huggingface/transformers.git@fix_tflongformer_int_dtype
Then ran the same code and still got the error. Did I install from the right branch?
```
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/transformers/models/longformer/modeling_tf_longformer.py in call(self, input_ids, attention_mask, head_mask, token_type_ids, position_ids, global_attention_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, labels, training)
2389 global_attention_mask = tf.convert_to_tensor(global_attention_mask, dtype=tf.int64)
2390 else:
-> 2391 global_attention_mask = tf.cast(global_attention_mask, tf.int64)
2392
2393 if global_attention_mask is None and input_ids is not None:
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
204 """Call target, and fall back on dispatchers if there is a TypeError."""
205 try:
--> 206 return target(*args, **kwargs)
207 except (TypeError, ValueError):
208 # Note: convert_to_eager_tensor currently raises a ValueError, not a
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in cast(x, dtype, name)
986 # allows some conversions that cast() can't do, e.g. casting numbers to
987 # strings.
--> 988 x = ops.convert_to_tensor(x, name="x")
989 if x.dtype.base_dtype != base_type:
990 x = gen_math_ops.cast(x, base_type, name=name)
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/profiler/trace.py in wrapped(*args, **kwargs)
161 with Trace(trace_name, **trace_kwargs):
162 return func(*args, **kwargs)
--> 163 return func(*args, **kwargs)
164
165 return wrapped
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1564
1565 if ret is None:
-> 1566 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1567
1568 if ret is NotImplemented:
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
344 as_ref=False):
345 _ = as_ref
--> 346 return constant(v, dtype=dtype, name=name)
347
348
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name)
269 ValueError: if called on a symbolic tensor.
270 """
--> 271 return _constant_impl(value, dtype, shape, name, verify_shape=False,
272 allow_broadcast=True)
273
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
281 with trace.Trace("tf.constant"):
282 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
--> 283 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
284
285 g = ops.get_default_graph()
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
306 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
307 """Creates a constant on the current device."""
--> 308 t = convert_to_eager_tensor(value, ctx, dtype)
309 if shape is None:
310 return t
~/anaconda3/envs/tf_gpu/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
104 dtype = dtypes.as_dtype(dtype).as_datatype_enum
105 ctx.ensure_initialized()
--> 106 return ops.EagerTensor(value, ctx.device_name, dtype)
107
108
ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.
```
<|||||>Hi @ichenjia, the command you ran looks correct but the traceback you pasted refers to an old version of the code. (`global_attention_mask = tf.cast(global_attention_mask, tf.int64)` is not on line 2391 anymore)
Can you try `pip uninstall transformers` and then rerunning the command above, and then restarting any jupyter notebook servers you're running to make sure you're using the PR branch?<|||||>Hey all - I'm going to merge the PR with the fix so that it can be included in the next release of `transformers` this week. However, if you have further problems, please reopen the issue and let me know! |
transformers | 13,631 | closed | Updated tiny distilbert models | Update tiny distilbert models | 09-17-2021 19:35:23 | 09-17-2021 19:35:23 | |
transformers | 13,630 | closed | Fix GPT2Config parameters in GPT2ModelTester | Fixed a few keyword arg names of `GPT2Config` in `GPT2ModelTester` which have different names in GPT2 than in other model classes. | 09-17-2021 18:52:25 | 09-17-2021 18:52:25 | |
transformers | 13,629 | closed | ResourceExhaustedError: Failed to allocate request for 64.00MiB (67108864B) on device ordinal 0 | I'm training an XLM model from huggingface on a small dataset using Colab TPU. However, I get the following error:
<ResourceExhaustedError: Failed to allocate request for 64.00MiB (67108864B) on device ordinal 0>
I tried reducing the batch size and restarting the kernel and freeing memory but nothing is working.
Any help would be appreciated. | 09-17-2021 18:09:20 | 09-17-2021 18:09:20 | Can you give some additional info? like the number of layers and the batch size?<|||||>I'm following this notebook ->https://www.kaggle.com/dimasmunoz/text-classification-with-roberta-and-tpus
I've tried RoBERTa, BERT and DistilBERT, and they all work fine on my dataset, except for XLM.
`MODEL_NAME = 'xlm-mlm-en-2048' ## 12 layers, 16 heads
MAX_LEN = 256
ARTIFACTS_PATH = '../artifacts/'
BATCH_SIZE = 4 * strategy.num_replicas_in_sync
EPOCHS = 3`<|||||>Worked fine for the English-German model "xlm-mlm-ende-1024" with 6 layers, 8 heads
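For anyone hitting the same wall: the usual levers are a smaller checkpoint, shorter sequences, or a smaller per-replica batch. A sketch reusing the notebook's variable names (the exact values below are illustrative guesses, not tested settings, and `strategy` is the TPU strategy set up earlier in the notebook):
```python
# Sketch: ways to fit a large XLM model in Colab TPU memory, reusing the notebook's variables.
MODEL_NAME = 'xlm-mlm-ende-1024'  # the smaller 6-layer checkpoint reported to work above
MAX_LEN = 128                     # shorter sequences cut activation memory roughly linearly
BATCH_SIZE = 1 * strategy.num_replicas_in_sync  # fewer examples per replica
```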
transformers | 13,628 | closed | Modified TF train_step | First attempt at modifying TF `train_step()` to neatly support passthrough from the model loss head.
This has the potential to break _absolutely everything_ Tensorflow in our entire codebase and it's very untested right now, so don't even _think_ about merging it until I've done some exhaustive testing. | 09-17-2021 18:04:52 | 09-17-2021 18:04:52 | |
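For context, the kind of change being described is roughly the standard Keras `train_step` override that reuses the loss computed by the model's own head instead of a separately compiled Keras loss. This is only an illustrative sketch of the pattern, not the actual PR code:
```python
import tensorflow as tf

class ModelWithInternalLoss(tf.keras.Model):
    """Minimal sketch: pass labels through and reuse the loss returned by the model's head."""

    def train_step(self, data):
        with tf.GradientTape() as tape:
            outputs = self(data, training=True)  # assumes the model returns an object with a .loss
            loss = outputs.loss
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        return {"loss": loss}
```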
transformers | 13,627 | closed | TF weights for xlm-roberta-base? | # 🚀 Feature request
Is there any plan to upload `tf_model.h5` model weights for `xlm-roberta-base` on https://huggingface.co/xlm-roberta-base/tree/main? So far there does not seem to be a way to load a `TFXLMRobertaModel`, as even the examples in the documentation load `roberta-base` (which is EN-only):
https://huggingface.co/xlm-roberta-base/tree/main
## Motivation
We are a tensorflow shop and would like to use the `TFXLMRobertaModel` with pretrained Tensorflow weights, but they do not exist.
| 09-17-2021 16:35:14 | 09-17-2021 16:35:14 | Hello @tgsmith61591, thank you for opening an issue! For legacy reasons, these models are here: https://huggingface.co/jplu/tf-xlm-roberta-base
We should merge them with the official checkpoints.<|||||>Awesome, thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
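For anyone landing here before the official TF weights are merged, a short sketch of the two workarounds (the second requires PyTorch to be installed so the weights can be converted on the fly):
```python
from transformers import TFXLMRobertaModel

# Option 1: the legacy TF checkpoint mentioned above
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")

# Option 2: convert the official PyTorch weights on the fly
model = TFXLMRobertaModel.from_pretrained("xlm-roberta-base", from_pt=True)
```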
transformers | 13,626 | closed | Huggingface master | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-17-2021 14:49:28 | 09-17-2021 14:49:28 | Please ignore this pull request, wrong pr |
transformers | 13,625 | closed | Removed console spam from misfiring warnings | A major source of UX annoyance is that when our code is used with Keras, warnings about inappropriate arguments fire every time the model is traced/compiled, which is at least once, and sometimes several times, per training run. I don't know why the warnings were written like this in the first place, but they're definitely not working now, so I'm going to remove them for now and do a deeper dive into the reason for the misbehaviour when I have more time. | 09-17-2021 12:52:09 | 09-17-2021 12:52:09 | The warning fires regardless of whether the options are set or not. I think it's only supposed to show when the user initializes a model, and then overrides those arguments when calling the model in graph mode.
However, because of the way `inputs_processing` handles things the non-overridden inputs are set to the config values and not to `None` when the arguments are passed to `boolean_processing`, so it just fires (often several times) every time we `fit()` any Transformers model. This means that users just get used to ignoring the warning the whole time anyway.
I'm 99% sure we can just choose solution 1) and totally remove the warning with no problems - I'm not really sure why we have this restriction on changing parameters in graph mode anyway in modern Tensorflow!<|||||>> The warning fires regardless of whether the options are set or not.
Yes, like I said, I think it's because the top model resolves the kwargs passed, then passes them to the base model which also sends the thing to `input_processing`.
But if you are sure they are unnecessary, let's remove them **and** the change in the kwargs.<|||||>I'm still a little bit afraid of changing the actual input processing until I do a deep dive into everything! I made a change to the PR: The warning still exists, but it confirms that the overridden value is actually different from the config value before it logs the warning. This eliminates the spam in all cases except the ones when it was actually supposed to display in the first place. |
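For reference, the guard described above could look roughly like this. This is an illustrative sketch only; the function and argument names are assumptions, not the final implementation:
```python
import logging

logger = logging.getLogger(__name__)

def warn_if_overridden_in_graph_mode(name, value, config):
    """Only warn when the caller actually changed the value away from the config default."""
    config_value = getattr(config, name, None)
    if value is not None and value != config_value:
        logger.warning(
            "The parameter `%s` cannot be changed in graph mode and will fall back to `config.%s`.",
            name,
            name,
        )
```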
transformers | 13,624 | closed | CPU memory (VRAM) not released after loading model in GPU | I am using the latest PyTorch version with CUDA 11.
When I load GPT-2 models with "cuda" as the device, everything works fine and the model computes on the GPU, but at the same time I see it consuming roughly 1.5× as much CPU RAM as the memory it occupies in GPU RAM.
Is it possible to release the CPU RAM once the model has been loaded into GPU RAM?
I tried gc.collect() but it didn't work. | 09-17-2021 11:00:23 | 09-17-2021 11:00:23 | Similar issue #13208.<|||||>Sorry to say, none of those threads contains a solution to this problem. Do you have one? So many people are asking about the same problem, but not a single solution has been posted.
Load a GPT2-medium model on the CPU first and then move it to the GPU, and you will see that the CPU RAM is not released; instead, even more is occupied.
<|||||>Hello @arsalan993, could you please complete the issue template with environment information + reproducible code example so that we may take a look. Thank you.<|||||>@LysandreJik @qqaatw
- `transformers` version: transformers 4.10.2
- Platform: Ubuntu 18
- Python version: 3.8
- PyTorch version (GPU?): torch==1.9.0+cu111 torchvision==0.10.0+cu111
- Tensorflow version (GPU?): None
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
I have 23 GB VRAM, 1080Ti 12GB graphic card
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Nov_30_19:08:53_PST_2020
Cuda compilation tools, release 11.2, V11.2.67
Build cuda_11.2.r11.2/compiler.29373293_0
```
---------------
Now here is the breakdown
When I load the model on the CPU, this much CPU RAM is consumed:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
torch_device = 'cpu'
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").to(torch_device)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
```
CPU VRAM consumption : `2.00 GB`
---------------------------------
What I cannot understand is this: if I load the model on the GPU, CPU RAM consumption should be minimal, but that is not the case; instead it increases.
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
torch_device = 'cuda'
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").to(torch_device)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
```
CPU VRAM consumption : `2.58 GB`
GPU RAM consumption : `2215MiB / 11178MiB`
I was expecting it to take a few hundred MB of CPU RAM, but instead CPU RAM consumption increased.
--------------------
I have tried the following solution https://github.com/huggingface/transformers/pull/11736 but it didn't work and increased CPU VRAM consumption a bit more.
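For anyone trying to pin down these numbers precisely, a minimal sketch of measuring the process's resident CPU memory around the load and the device move. This is only a measurement aid, not a fix, and it assumes `psutil` is installed and a CUDA GPU is available:
```python
import gc
import psutil
import torch
from transformers import GPT2LMHeadModel

def rss_mb():
    # Resident set size of this process, in MB
    return psutil.Process().memory_info().rss / 1024**2

print(f"start: {rss_mb():.0f} MB")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
print(f"after CPU load: {rss_mb():.0f} MB")
model = model.to("cuda")
gc.collect()
torch.cuda.synchronize()
print(f"after move to GPU: {rss_mb():.0f} MB")
```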
<|||||>Can you please let me know whether I have the wrong mental model of how the backend works, for example whether something that is to be computed on the GPU also keeps a copy in CPU RAM.
please clear me out<|||||>@arsalan993 could you take a look at the comments in https://github.com/huggingface/transformers/issues/13208?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,623 | closed | AutoTokenizer - add `from_model_name` method | This PR adds a new `.from_model_name(...)` class method to `AutoTokenizer`.
In some example scripts we would like to train the tokenizer from scratch or create it from scratch - e.g. see: https://github.com/huggingface/transformers/pull/13620. Depending on which model is used the corresponding tokenizer class should be loaded. I think the cleanest way to pick the corresponding tokenizer class is to go over the **model names**, such as `"bert"`, `"gpt2"` etc... | 09-17-2021 10:45:43 | 09-17-2021 10:45:43 | > ```python
> tokenizer = AutoTokenizer.from_config(config)
> ```
Hmm, I think `from_model_name(....)` is the better option here because:
- The use cases of `AutoTokenizerr.from_config(...)` was mainly intended for example scripts where the tokenizer is trained/created in the script itself. In such a case, we can't leverage the model config since it's the creation of the tokenizer that defines `vocab_size`, `pad_token_id`, .... => e.g. I would like to enable to create a generic ASR tokenizer in such a script: https://github.com/huggingface/transformers/blob/da8beaaf762c0ea4eecb150039be63949fe5cf94/examples/research_projects/wav2vec2/run_common_voice.py#L358. Note that in this script the vocab file is created in the example.
- We can't do `AutoTokenizer.from_config(...)` as we also will have to pass a vocab_file, merges file, etc... -> so the function cannot have the same API as `AutoModel`
- I think we were planning on seperating the model config and the tokenizer config instead of having a single config => I think it can be confusing then to do `AutoTokenizer.from_config(...)` as one doesn't know whether a tokenizer config or model config should be passed here. Essentially I was aiming for such an API:
```python
config = AutoConfig.from_pretrained("facebook/wav2vec2-pretrained") # <- pretrained speech models have no information about vocab size, etc.... yet
# ... create tokenizer vocab files
tokenizer = AutoTokenizer.from_model_name(config.model_type, vocab_file="path/to/vocab/file")
config["vocab_size"] = len(tokenizer)
model = AutoModelForCTC.from_config(config)  # <- now create the model
```
I haven't thought too much about the API for the "NLP" tokenizers in this case, but couldn't one use `tokenizers` to train a tokenizer and then load it into a `transformers` tokenizer class via `AutoTokenizer.from_model_name("roberta", tokenizer_file="path/to/tokenizer")`? Or does the API doesn't make much sense here?
Keen to hear your thoughts here @LysandreJik @SaulLu <|||||>I understand your points. I agree that it is confusing to have `AutoTokenizer.from_config(...)` rely on the model configuration.
A last comment: `from_model_name` implies that a tokenizer is 1-to-1 linked with a model, but that's not the case. For example, what of BERTweet, which has a `BertweetTokenizer` but its model is a RoBERTa, or GPT-Neo/GPT-J/iBERT that use a `GPT2Tokenizer` under the hood? I'd opt for having `from_name`, or `from_type` if the goal is to always leverage `config.model_type` instead.<|||||>After talking to Lysandre offline a better solution might actually be to leverage the `AutoTokenizer.from_pretrained(...)` method directly.
The main reason for this PR is to make **fine-tuning of pretrained speech recognition models easier**. I want to be able to write a fine-tuning script just by using `Auto....` classes and no `Wav2Vec2...`.
To give a bit more context:
In speech recognition, a pretrained speech model (Wav2Vec2, HuBERT) has the following files defined after pretraining:
1) the **pretrained model**, which can be loaded with `AutoModelForCTC.from_pretrained(...)`
2) the **feature extractor**, which can be loaded with `AutoFeatureExtractor.from_pretrained(...)`
3) the **configuration file**, which needs to be adapted before being used to load the model
**Note** that no tokenizer is needed for pretraining; one has to be defined on the fly for fine-tuning.
In fine-tuning the workflow is then as follows:
```python
# 1. load the config
config = AutoConfig.from_pretrained("facebook/wav2vec2-base")
# 2. create the vocabulary
characters = training_data["unique_chars"]
vocab_dict = {v: k for k, v in enumerate(characters)}
# 3. save the vocabulary locally to be able to create the tokenizer
with open("vocab.json", "w") as f:
    json.dump(vocab_dict, f)
# 4. IMPORTANT: Now I want to instantiate a new tokenizer using the just created `"vocab.json"` and passing the model type.
# The tokenizer will be uploaded to the model repo after fine-tuning and will use the newly created `vocab.json`
# <- so possible solutions to this are
# 1. AutoTokenizer.from_name(config.model_type)
# 2. AutoTokenizer.from_config(config)
# 3. AutoTokenizer.from_pretrained("./", config=config)
# 4. AutoTokenizer.from_pretrained("./", model_type=config.model_type)
```
Case # 1. was the proposal of this PR.
I'm not a fan of # 2. as it's not clear to me whether `config` is a tokenizer config or a model config
Case # 3. actually already works out of the box and wouldn't require any changes. Given that the API is already there I think it's fine to use it. However, here I'm also a bit worried that people will assume that the `config` will actually overwrite also attributes in the tokenizer config. E.g. I could see people setting the `pad_token_id`, `vocab_size` in the config and then assume the tokenizer will have the same `vocab_size` and `pad_token_id` after calling `AutoTokenizer.from_pretrained("./", config=config)`
Case # 4. would require some minimal changes to `AutoTokenizer` (just adding a kwargs.pop("model_type")) essentially
=> Having talked to @LysandreJik a bit more about it, I think it indeed does not necessarily make sense to create a whole new API for `AutoTokenizer`, and it should be enough to just use the already existing `.from_pretrained(...)`.
Also it seems like in the case of training a tokenizer from scratch, one would simply always make use of `PreTrainedTokenizer(...)` instead of using the `AutoTokenizer` class.
@SaulLu - I very much agree that we should not assume that the user knows how to spell the model name. Rather, it should be extracted from the configuration file.
I would be in favor of Case #4 as I think it's the cleanest approach, but given that the API for Case #3 already exists I guess I would also be fine with using that one. I still think it's quite confusing to pass the model config to the AutoTokenizer, considering that we have a tokenizer config in `tokenizer_config.json`... I also think `AutoTokenizer.from_pretrained(..., config=config)` is not really used, no?
What do you guys think?
(Sorry for the long discussions here!) <|||||>Thanks for the write-up @patrickvonplaten. I think case 4 is the cleanest approach too, even if I still dislike `model_type` - how about `tokenizer_type`?
This is a nitpick though, happy to go with either.<|||||>I don't mind adding the API for case 4, but I share Lysandre's view on the name of the argument. `tokenizer_type` would be better indeed.<|||||>Superseded by #13668 |
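For reference, the API the discussion converged on (case 4, with the argument renamed to `tokenizer_type`) would be used roughly like this; this is a sketch following the thread, and PR #13668 has the exact signature:
```python
from transformers import AutoTokenizer

# "./output_dir" is assumed to contain the freshly created vocab.json from the workflow above
tokenizer = AutoTokenizer.from_pretrained("./output_dir", tokenizer_type="wav2vec2")
```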
transformers | 13,622 | closed | Add `obj-det` pipeline support for `LayoutLMV2` | # What does this PR do?
Note: I'm using terms `document-understanding` & `layout-detection` interchangeably and imo, term `layout-detection` sounds more accurate.
As we have discussed in huggingface/hub-docs#21, reusing `object-detection` pipeline for `layout-detection` architectures (specifically, `LayoutLMv2ForTokenClassification).
An important detail in reusing so is that:
* `LayoutLMv2ForTokenClassification` needs `LayoutLMv2Processor` to preprocess input image
* Since processor is just a combination of (tokenizer + feature_extractor), `ObjectDetectionPipeline.preprocess` does exactly what `LayoutLMv2Processor` does when the selected model has architecture `...ForTokenClassification`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@Narsil @NielsRogge | 09-17-2021 10:35:11 | 09-17-2021 10:35:11 | Currently, there are 2 layout models: [layoutlm](https://github.com/huggingface/transformers/tree/master/src/transformers/models/layoutlm) & [layoutlmv2](https://github.com/huggingface/transformers/tree/master/src/transformers/models/layoutlmv2).
Unlike layoutlmv2, layoutlm lacks its own feature_extractor (which should be quite similar to layoutlmv2's) -> therefore, layoutlm (LayoutLMForTokenClassification) can't be used in ObjectDetectionPipeline currently.
I propose to take care of it (add feature_extractor to layoutlm) in a separate PR
wdyt @Narsil @NielsRogge <|||||>@mishig25 you can directly use `LayoutLMv2FeatureExtractor` for LayoutLMv1. You should just not use the `image` key it creates.<|||||>@NielsRogge sounds great.
Because of the `transformers` design, I will copy all of the code from `models/layoutlmv2/feature_extraction_layoutlmv2.py` -> `models/layoutlm/feature_extraction_layoutlm.py`
one detail I'd like to know is:
1. [LayoutLMForTokenClassification](https://github.com/huggingface/transformers/blob/eae7a96b7d5810ee6723e41f9b316cc51672fbb1/src/transformers/models/layoutlm/modeling_layoutlm.py#L1082-L1087) exists for the Pytorch version of the architecture only (not for Tensorflow version of the model)
2. LayoutLMv2FeatureExtractor has torch dependency as [in here](https://github.com/huggingface/transformers/blob/f3248a67ca8ed8c1075f7bdcb7ab896a5c59f4d0/src/transformers/models/layoutlmv2/feature_extraction_layoutlmv2.py#L30-L33)
3. Because 1 & 2 -> therefore, I should not add any tensorflow related code to `layoutlm/feature_extraction_layoutlm.py`. Is it correct?<|||||>Ok, perhaps it might be better to do that in a separate PR. Because then you also need to add the tokenizer, etc.
Also, the feature extractor shouldn't have a PyTorch dependency (only depends on PIL), unless you use it for postprocessing.<|||||>@Narsil since I've made [this change](https://github.com/huggingface/transformers/pull/13622#discussion_r715404373),
[ci/circleci: run_tests_pipelines_torch ](https://app.circleci.com/pipelines/github/huggingface/transformers/28325/workflows/6bb6f52b-bbc1-4f38-936f-a1cb31490c1c/jobs/278922) is failing for a reason: `requires_backends(self, "detectron2")`
However, I've tried adding `@require_detectron2` decorator both to `class ObjectDetectionPipelineTests` & method `run_pipeline_test` and wasn't able to make the test pass.
What am I missing?<|||||>@mishig25 It seems it's unrelated tests. Did you rebase ?
It seems like other generic tests are failing because Layout is declaring himself as ForQuestionAnswering etc... and those tests are ran without detectron2...
Feel free to ignore, I'll take a look<|||||>@Narsil I did rebase. It should be unrelated but the tests started failing after commit https://github.com/huggingface/transformers/pull/13622/commits/df86d519d89e69ec2b823cbb36bac58e6bfaac18
Thanks a lot for looking at this!<|||||>@LysandreJik could you please check this https://github.com/huggingface/transformers/pull/13622/commits/c93385e0af44b41373100a81ba44628e1bc60379 and https://github.com/huggingface/transformers/pull/13622/commits/defd5747afdc906d7acc86225d4fe41ca8336398 commits where Nicolas is disabling some tests for a reason:
> Basically Layout displays itself as a valid model for many pipelines but it isn't (like other vision models it's fine) I just disabled those tests as they're not supposed to work
(ForQuestionAnswering, ForTextClassification and so on)<|||||>@mishig25 Any blockers left for this ?<|||||>~Feel free to merge when ready @mishig25~ @NielsRogge corrected me that some conversation still needs to happen before this is merged.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,621 | closed | How to freeze a few layers in t5 model during fine-tuning | null | 09-17-2021 09:41:17 | 09-17-2021 09:41:17 | In PyTorch, this is very easy, like so:
```
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("t5-base")
for name, param in model.named_parameters():
if name == "...":
param.requires_grad = False
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
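A slightly more concrete variant of the snippet above: instead of matching one exact parameter name, freeze whole sub-modules by prefix. Freezing the encoder here is just one possible choice for illustration:
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Freeze the whole encoder; the decoder and output layers keep training.
for name, param in model.named_parameters():
    if name.startswith("encoder."):
        param.requires_grad = False

print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable parameters")
```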
transformers | 13,620 | closed | [ASR] Add official ASR CTC example to `examples/pytorch/speech-recognition` | This PR adds a generic speech recognition for CTC example. It has been tested for single GPU and distributed training on Common Voice and is being tested on Librispeech currently.
Once `datasets` has https://github.com/huggingface/datasets/pull/2324/files merged and made a new release I will slightly adapt the script to leverage the new audio feature.
A couple of example runs with this script:
- https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-demo
- https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-demo-dist
This example folder should have two additional scripts: 1 for Seq2Seq ASR + 1 for CTC + LM decoding which are left for future work | 09-17-2021 09:33:05 | 09-17-2021 09:33:05 | |
transformers | 13,619 | closed | [Trainer] Add nan/inf logging filter | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Some losses sometimes produce `inf` losses which however doesn't necessarily mean that the training went bad. When using the CTC loss for speech recognition - see: https://github.com/huggingface/transformers/pull/13620 this is often the case. The problem is that as soon as one single loss step is `inf` or `nan` the rest of the training logs will display `inf` or `nan`. In this PR a flag is added that allows the user to filter out `nan` and `inf` values for training. It defaults to `False` and will be set to `True` in all CTC training scripts.
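A minimal sketch of the filtering idea described above; the names below are illustrative assumptions, not the final Trainer code:
```python
import torch

def accumulate_loss(running_loss, loss_step, steps_since_last_log, filter_nan_inf=True):
    """Ignore nan/inf loss steps when accumulating the loss used for logging."""
    if filter_nan_inf and (torch.isnan(loss_step) or torch.isinf(loss_step)):
        # substitute the average of previously accumulated losses instead of poisoning the sum
        return running_loss + running_loss / max(1, steps_since_last_log)
    return running_loss + loss_step
```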
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-17-2021 09:23:19 | 09-17-2021 09:23:19 | Happy to let it default to True! |
transformers | 13,618 | closed | How to change type_vocab_size? | tensorflow==2.3.1
transformers==4.2.1
My code is as follow:
```
transformer = TFAutoModel.from_pretrained('hfl/chinese-roberta-wwm-ext', from_pt=False, type_vocab_size=3)
```
I got error:
```
ValueError: cannot reshape array of size 1536 into shape (3,768)
``` | 09-17-2021 08:00:01 | 09-17-2021 08:00:01 | The embedding layer of the exisiting pretrained model you are loading has shape (2, 768). If you specify another shape, like (3, 768), then it cannot reshape the existing embedding layer to the new shape you are asking.
You can however circumvent this using the new `ignore_mismatched_sizes` argument:
```
from transformers import TFAutoModel
transformer = TFAutoModel.from_pretrained('hfl/chinese-roberta-wwm-ext', from_pt=False, type_vocab_size=3,
ignore_mismatched_sizes=True)
```
This will print the following warning:
```
All model checkpoint layers were used when initializing TFBertModel.
Some weights of TFBertModel were not initialized from the model checkpoint at hfl/chinese-roberta-wwm-ext and are newly initialized because the shapes did not match:
- bert/embeddings/token_type_embeddings/embeddings:0: found shape (2, 768) in the checkpoint and (3, 768) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,617 | closed | Correct GPT-J vocab_size | # What does this PR do?
Corrects GPT-J `vocab_size`. GPT-J has 50400 `vocab_size` but the tokenizer’s vocab size is 50257. And the `run_clm.py` script always resizes the token embeddings using `len(tokenizer)`, so when the model is fine-tuned with that script, it changes the embedding size, which results in shape mismatch as described in #13499.
These extra tokens are for the sake of efficiency on TPU and are not used by the model.
This however would break all existing downloaded models, since those checkpoints will have 50400 `vocab_size`. The solution would be to manually change the `vocab_size` in the `config.json` file to 50257.
Fixes #13499, Fixes #13581 | 09-17-2021 05:23:35 | 09-17-2021 05:23:35 | Should we update the official GPT-J config then as well? <|||||>Yes, will update the official configs and weights as well, if the PR is approved.<|||||>T5 actually has the same issue where there is a mismatch between model's vocab size and tokenizer's vocab size which led to quite some confusion so very much in favor of changing it here (especially since there hasn't been an official release yet).
Note that the official GPT-J repo has two branches so we should make sure to update both<|||||>Yes, the extra tokens are there for efficiency reasons on TPU, since the original implementation uses model parallelism.
I agree that we could modify the script and maybe add a flag, so embeddings are only resized if the user explicitly passes the flag.
But if a user wants to add new tokens to the tokenizer and resize embeddings, it will reduce the `vocab_size`, which is a bit confusing IMO (that is the case with T5)<|||||>I understand the change and think that this will indeed lead to painful errors. If modifying the model size is not an option for optimization's sake and not resizing the model leads to painful errors, how about resizing the tokenizer by adding new (unused) tokens to it?
Resize the tokenizer to 50400 so that it matches the model's vocab size, and put unused values in the unused range. WDYT?<|||||>I think @LysandreJik 's solution is the best!<|||||>Think I'm fine with adding unused tokens to the tokenizer.
I'm a bit worried about the community being confused, as GPT-J uses the official GPT2 tokenizer, which has 50257 tokens (with the last token being the EOS token). So reading that GPT-J uses the official GPT2 tokenizer (https://github.com/kingoflolz/mesh-transformer-jax#model-details, 50257 tokens) and then seeing a different vocab size on the hub might be a bit confusing for people (maybe with a good warning it's fine?)
Also think though that @LysandreJik is the best solution here<|||||>Good call! Maybe a mention on the model card mentioning why the model has a vocab size of 50400 instead of the 50257 tokens it has been trained with is a solution? <|||||>cc @StellaAthena <|||||>Sounds good to me @LysandreJik!
> Maybe a mention on the model card mentioning why the model has a vocab size of 50400 instead of the 50257 tokens it has been trained with is a solution?
I think the model card already mentions this and maybe we could also put this in docs as well.<|||||>[added extra tokens](https://huggingface.co/EleutherAI/gpt-j-6B/commit/d3109d20aec493086dfca34b79bc233c48419949) and updated the tokenizer, #13696 adds a note about this in the docs.
Will close this PR now. |
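For reference, the "pad the tokenizer with unused tokens" approach that was adopted looks roughly like this. The placeholder token names below are an assumption for illustration; the actual added tokens may be named differently:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# Add dummy tokens until the tokenizer length matches the model's (TPU-padded) vocab size.
n_missing = model.config.vocab_size - len(tokenizer)  # e.g. 50400 - 50257
if n_missing > 0:
    tokenizer.add_tokens([f"<|unused_{i}|>" for i in range(n_missing)])
assert len(tokenizer) == model.config.vocab_size
```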
transformers | 13,616 | closed | DeepSpeed: There is no obvious benefit from increasing the number of GPU nodes | I tested the performance of DeepSpeed on a GPU cluster, all GPUs are V100 16GB.
"2x4" means 2 node with 4 gpu each
my setup:
HF launch scripts:
```bash
deepspeed \
--hostfile=hostfile.txt \
run_clm.py \
--deepspeed ds_config.json \
--model_name_or_path /huggingface_with_deepspeed/gpt2-large/ \
--preprocessing_num_workers $(nproc) \
--do_train \
--train_file /data/train_mini.txt \
--fp16 true \
--output_dir output \
--overwrite_output_dir true \
--per_device_train_batch_size 1
```
DeepSpeed config:
```json
{
"train_micro_batch_size_per_gpu": "auto",
"fp16": {
"enabled": true,
"loss_scale": 0
},
"flops_profiler": {
"enabled": true,
"profile_step": 10,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": "./tmp.log"
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param":{
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true,
"grad_hooks": true,
"round_robin_gradients": false
},
"gradient_clipping": 1.0,
"wall_clock_breakdown": false,
"sparse_attention": {
"mode": "fixed",
"block": 16,
"different_layout_per_head": true,
"num_local_blocks": 4,
"num_global_blocks": 1,
"attention": "bidirectional",
"horizontal_global_attention": false,
"num_different_global_patterns": 4
}
}
```
performance result:
|GPT2-large(770M)<br>BATCH_SIZE=1<br>1x4 | HF with DS(ZeRO-1) | HF with DS(ZeRO-2) | HF with DS(ZeRO-2-offload) | HF with DS(ZeRO-3) | HF with DS(ZeRO-3-offload)|
|-- | -- | -- | -- | -- | --|
|CUDA MEM(MB)|14837 | 15983 | 14483 | 16155 | 13145|
|Forward time(ms)|83.59 | 83.72 | 83.47 | 529.49 | 538.30|
|Backward time(ms)|437.23 | 283.19 | 658.34 | 1016.79 | 976.68|
|Step time(ms)|2370.66 | 2372.91 | 4455.99 | 154.37 | 2163.18|
|Total time(ms)|2892 | 2740 | 5199 | 1770 | 3740|
|Throughput(samples/s)|1.38 | 1.46 | 0.77 | 2.26 | 1.06|
|GPT2-large(770M)<br>BATCH_SIZE=1<br>2x4 | HF with DS(ZeRO-1) | HF with DS(ZeRO-2) | HF with DS(ZeRO-2-offload) | HF with DS(ZeRO-3) | HF with DS(ZeRO-3-offload)|
|-- | -- | -- | -- | -- | --|
|CUDA MEM(MB)|14979 | 14903 | 14457 | 14521 | 13147|
|Forward time(ms)|83.89 | 83.90 | 83.61 | 2685.72 | 2701.03|
|Backward time(ms)|5370.95 | 2783.34 | 3180.92 | 4502.87 | 4618.15|
|Step time(ms)|2445.77 | 2338.42 | 3759.82 | 131.51 | 1335.59|
|Total time(ms)|7901 | 5206 | 7063 | 7320 | 8655|
|Throughput(samples/s)|1.01 | 1.53 | 1.13 | 1.09 | 0.92|
|GPT2-large(770M)<br>BATCH_SIZE=1<br>4x4 | HF with DS(ZeRO-1) | HF with DS(ZeRO-2) | HF with DS(ZeRO-2-offload) | HF with DS(ZeRO-3) | HF with DS(ZeRO-3-offload)|
|-- | -- | -- | -- | -- | --|
|CUDA MEM(MB)|14793 | 14401 | 14247 | 13763 | 12981|
|Forward time(ms)|83.36 | 83.43 | 83.71 | 2339.03 | 2414.61|
|Backward time(ms)|4980.09 | 2842.94 | 3194.67 | 4201.04 | 4348.77|
|Step time(ms)|2632.80 | 2777.44 | 3384.50 | 177.98 | 870.95|
|Total time(ms)|7701 | 5707 | 6665 | 6722 | 7634|
|Throughput(samples/s)|2.07 | 2.80 | 2.4 | 2.38 | 2.09|
To my understanding, several things here look strange:
- GPU memory has not decreased significantly as the number of nodes increases.
- Offload does not seem to reduce memory significantly.
- The [ZeRO-Offload](https://arxiv.org/abs/2101.06840) paper says "It can train models with over 13 billion parameters on a single GPU with ZeRO2-Offload", but in practice I can't even train the GPT2-xl model (only 1.5B parameters) on 16 GPUs.
Are these results in line with expectations? | 09-17-2021 03:30:51 | 09-17-2021 03:30:51 | Thank you for your detailed report, @dancingpipi
I will try to study your report later today, but until then please have a look at:
https://deepspeed.readthedocs.io/en/stable/memory.html
which should answer at least part of this query.<|||||>Your reported gpu memory usage numbers indeed don't make sense, the differences typically are quite dramatic. You can refer to the link above to see how it's supposed to be. How are you measuring the cuda memory? What do these tables report - an average? a total? on which gpu are you measuring this, which tool?
Are you sure your hostfile changes between the runs?
Did you know the Trainer has the memory measurements already built in?
Just add `--skip_memory_metrics 0` to the script's cmd - note: it reports just the first gpu's memory.<|||||>thanks for your quick reply!
> How are you measuring the cuda memory?
I measure the cuda memory with "nvidia-smi".
> What do these tables report - an average? a total?
An average
> on which gpu are you measuring this?
On first GPU
> which tool?
DeepSpeed print some metrics on the screen, look like this:

> Did you know the Trainer has the memory measurements already built in?
No, I'll try the '--skip_memory_metrics 0' argument<|||||>Let's ignore other metrics as they are irrelevant to this inquiry - let's stick to just memory measurement.
`nvidia-smi` may not always report the real memory usage for a given application, though it's great to see the overall memory availability. `nvml` and tools based on it are easier to use since they are programmable.
In HF Transformers we use pytorch's internal memory management tools which are very precise and account only pytorch's allocations:
https://github.com/huggingface/transformers/blob/b518aaf193938247f698a7c4522afe42b025225a/src/transformers/trainer_utils.py#L276
----------------
One other concern I have is that Deepspeed doesn't validate its config file, so if you make a typo it just ignores the bad entry and uses its defaults. So looking at the log files helps to see that your config actually made through.
Also you have shared just one of the config files, but your table calls for many different configs so I can't see what you're doing.
Let's narrow the experiment a bit and just compare 2 configs on 1x 4 gpus, let's pick zero2+cpu_offload and zero3+cpu_offload.
So let's see the 2 configs and the 2 log files. For the log files please attach them to your comment via attachment, don't paste them, as they are huge, the 2 configs you can paste.
Thanks.
So it'd probably help to look at a few log files. For example, one log file with zero3 and another with zero2<|||||>**zero2_config.json**
```
{
"train_micro_batch_size_per_gpu": "auto",
"fp16": {
"enabled": true,
"loss_scale": 0
},
"flops_profiler": {
"enabled": true,
"profile_step": 10,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": "./tmp.log"
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true,
"grad_hooks": true,
"round_robin_gradients": false
},
"gradient_clipping": 1.0,
"wall_clock_breakdown": false,
"sparse_attention": {
"mode": "fixed",
"block": 16,
"different_layout_per_head": true,
"num_local_blocks": 4,
"num_global_blocks": 1,
"attention": "bidirectional",
"horizontal_global_attention": false,
"num_different_global_patterns": 4
}
}
```
**zero3_config.json**
```
"train_micro_batch_size_per_gpu": "auto",
"fp16": {
"enabled": true,
"loss_scale": 0
},
"flops_profiler": {
"enabled": true,
"profile_step": 10,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": "./tmp.log"
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param":{
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true,
"grad_hooks": true,
"round_robin_gradients": false
},
"gradient_clipping": 1.0,
"wall_clock_breakdown": false,
"sparse_attention": {
"mode": "fixed",
"block": 16,
"different_layout_per_head": true,
"num_local_blocks": 4,
"num_global_blocks": 1,
"attention": "bidirectional",
"horizontal_global_attention": false,
"num_different_global_patterns": 4
}
}
```
[zero2.log](https://github.com/huggingface/transformers/files/7224608/zero2.log)
[zero3.log](https://github.com/huggingface/transformers/files/7224609/zero3.log)
@stas00 I'm sorry for not having timely feedback.
Above are config and log for zero2 and zero3.
I didn’t run the entire training because it would take a lot of time.<|||||>Thank you for sharing the configs and matching logs, @dancingpipi. The logs look matching the config, can't see any issue there.
> I didn’t run the entire training because it would take a lot of time.
but we need the metric reports - could you please run like 5 iterations of each of these 2 configs and include the full log with the final metric report?
And please include `--skip_memory_metrics 0` argument so we get the memory usage reported.
Additionally, I meant to ask why were you adding the sparse attention config? `transformers` isn't doing anything with it at the moment - perhaps it configures something there that adds up memory usage?<|||||>@stas00 Sorry for late reply.
I have run an entire training on a small dataset.
The logs are here.
[zero-2.log](https://github.com/huggingface/transformers/files/7282786/zero-2.log)
[zero-3.log](https://github.com/huggingface/transformers/files/7282868/zero-3.log)
<|||||>Thank you for sharing the logs, @dancingpipi, so let's have a look:
```
# zero 2
train_mem_cpu_alloc_delta = 4429MB
train_mem_cpu_peaked_delta = 2950MB
train_mem_gpu_alloc_delta = 1635MB
train_mem_gpu_peaked_delta = 8862MB
# zero 3
train_mem_cpu_alloc_delta = 3428MB
train_mem_cpu_peaked_delta = 1442MB
train_mem_gpu_alloc_delta = -399MB
train_mem_gpu_peaked_delta = 10352MB
```
You can clearly see that memory usage is far from being the same.
The report with negative numbers is confusing under ZeRO3, so I made an improved reporting here: https://github.com/huggingface/transformers/pull/13915 so that it reports the initial memory at the start and now the negative delta adds up to the pre-allocated memory on the gpu. You can retest with that PR to get a more exact picture.
So let's step aside for a moment from your configs and use the default configs I pre-made for z2 and z3 and a small model:
ZeRO2
```
#MODEL="sshleifer/tiny-gpt2"
MODEL="gpt2"
PYTHONPATH=src deepspeed --num_gpus 2 \
examples/pytorch/language-modeling/run_clm.py \
--deepspeed tests/deepspeed/ds_config_zero2.json \
--model_name_or_path $MODEL \
--do_train \
--train_file tests/fixtures/sample_text.txt \
--fp16 true \
--output_dir output \
--overwrite_output_dir 1 \
--per_device_train_batch_size 1 \
--report_to none \
--skip_memory_metrics 0
***** train metrics *****
before_init_mem_cpu = 5759MB
train_mem_cpu_alloc_delta = 1952MB
train_mem_cpu_peaked_delta = 224MB
before_init_mem_gpu = 0MB
train_mem_gpu_alloc_delta = 324MB
train_mem_gpu_peaked_delta = 2611MB
```
ZeRO3
```
MODEL="gpt2"
PYTHONPATH=src deepspeed --num_gpus 2 \
examples/pytorch/language-modeling/run_clm.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path $MODEL \
--do_train \
--train_file tests/fixtures/sample_text.txt \
--fp16 true \
--output_dir output \
--overwrite_output_dir 1 \
--per_device_train_batch_size 1 \
--report_to none \
--skip_memory_metrics 0
***** train metrics *****
before_init_mem_cpu = 5861MB
train_mem_cpu_alloc_delta = 2158MB
train_mem_cpu_peaked_delta = 224MB
before_init_mem_gpu = 193MB
train_mem_gpu_alloc_delta = -108MB
train_mem_gpu_peaked_delta = 2015MB
```
I rearranged the reports in groups so it's easy to make sense of the numbers.
It's easy to see that the memory usage is far from the same between the 2 cases.
Does this give any help? Perhaps you can build a bridge between what I shared and what you use and continue monitoring memory usage?
HF Trainer metrics use pytorch reporting - so you get the exact allocations for GPU memory usage.<|||||>I also run z3 w/ gpt2 on 1 vs 2 gpus and indeed I see a very similar memory usage. The answer is `cpu_offload`. Turn it off and you will see a dramatic change in memory usage between 1 vs many gpus.
when I turn offload off I get total gpu memory usage: i.e.: `before_init_mem_gpu+train_mem_gpu_alloc_delta+train_mem_gpu_peaked_delta` to be:
- 1 gpu: 3917MB
- 2 gpus: 2976MB (on the first gpu)
this is 25% difference on a very small model.
(same cmd as above)
(and please remember to use my PR if it's not yet merged when you try to replicate this so that `before_init_mem_gpu` metric is included)
Have a look at https://deepspeed.readthedocs.io/en/stable/memory.html
and you can quickly evaluate the needed memory with different config options turned on and the number of gpus you have. This will help you a lot to appreciate the impact of each deepspeed setting.<|||||>@stas00 Thanks for your reply!
From your experiment, I have summarized a few points:
- Use 'skip_memory_metrics 0' to report memory usage instead of 'nvidia-smi'
- As the number of GPUs increases, memory usage stays similar if `cpu_offload` is turned on.
- Memory usage does change between 1 GPU and many GPUs if `cpu_offload` is turned off.
- We can evaluate the needed memory through https://deepspeed.readthedocs.io/en/stable/memory.html
I still encounter some problems:
- https://deepspeed.readthedocs.io/en/stable/memory.html only reports the memory needed for params, optimizer states and gradients. This is of little value to me, because I only care about the total GPU memory at runtime.
- According to the [ZeRO-Offload](https://arxiv.org/abs/2101.06840) paper, offload can save a lot of GPU memory, but my experiment does not show that.
- Is it possible to run the gpt2-xl model (1.5B) on multiple V100 16GB GPUs? If so, how can I set up my configuration file? If not, can you tell me the reason, because I think it is feasible after reading the paper
Sincere thanks<|||||>> * 'https://deepspeed.readthedocs.io/en/stable/memory.html' only report the memory needed for params, optim and gradients, This is of little value to me, because I only care about the total GPU memory during runtime
I hear you. I was hoping to find time to write a memory usage breakdown doc, but I'm not sure when this will happen. I have most of the information already, other than figuring out the activation memory which will require some analyzing of the model.
Would you be interested in helping to figure out the memory-needs math for the activation memory?
> * To the paper [ZeRO-Offload](https://arxiv.org/abs/2101.06840), offload can save a lot of gpu memory, but my experiment not show that.
The best place to ask is https://github.com/microsoft/DeepSpeed, since they wrote the paper. It's probably best to tag Samyam in your post there.
> * Is it possible to run the gpt2-xl model (1.5B) on multiple V100 16GB GPUs? If so, how can I set up my configuration file? If not, can you tell me the reason, because I think it is feasible after reading the paper
Should we attack this specific request in a separate issue? That is, open a new issue here with the cmd and config you tried and the error (I assume OOM), plus your hardware spec (how many V100 16GB GPUs, how much CPU memory), ideally using HF Transformers examples so that I can reproduce it. Then we can look at fine-tuning the setup so that you can accomplish what you need. And of course please tag me.
Apologies for taking so long to follow up, been "putting out a lot of fires" recently.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,615 | closed | Why do we use a truncated normal initializer instead of the default glorot_uniform in TFBert? | I'm confused about the initializer in the BERT implementation: each Dense layer uses a `truncated normal initializer` with std=0.02.
Why does BERT do that? Is there any explanation?
A code example:
```python
self.query = tf.keras.layers.Dense(
units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query"
)
self.key = tf.keras.layers.Dense(
units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key"
)
self.value = tf.keras.layers.Dense(
units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value"
)
def get_initializer(initializer_range: float = 0.02) -> tf.initializers.TruncatedNormal:
    """
    Creates a :obj:`tf.initializers.TruncatedNormal` with the given range.

    Args:
        initializer_range (`float`, defaults to 0.02): Standard deviation of the initializer range.

    Returns:
        :obj:`tf.initializers.TruncatedNormal`: The truncated normal initializer.
    """
    return tf.keras.initializers.TruncatedNormal(stddev=initializer_range)
``` | 09-17-2021 03:08:37 | 09-17-2021 03:08:37 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,614 | closed | Use `config_dict_or_path` for deepspeed.zero.Init | # What does this PR do?
While #13587 fixes one DeepSpeed warning, it missed a similar one. This applies the same change of `config` -> `config_dict_or_path` in another location.
## Who can review?
@sgugger (same reviewer as #13587) | 09-17-2021 02:10:12 | 09-17-2021 02:10:12 | I'll let @stas00 merge this after he double-checks it's good :-)<|||||>@stas00, you're welcome! Thanks for the review! |
transformers | 13,613 | closed | Fixes issues with backward pass in LED/Longformer Self-attention | ## Description
This PR fixes the computational graph created when computing the global attention scores in LED/Longformer Self-attention. The current implementation breaks the computational graph preventing the model from running the backward pass correctly in some cases. As explained in the dedicated issue, this problem arises in the current version of PyTorch (PyTorch 1.9) but not in one of the previous ones (PyTorch 1.7.1).
This PR simply clones the appropriate tensors in order to avoid the issue. `clone()` is a differentiable operation so there are no changes to the actual model behaviour.
The issue is carefully reproduced in the following Google Colab: https://colab.research.google.com/drive/13rKxs6Ype0kDEBlnywsGynE2zpzv2CR-?usp=sharing
Fixes #12613
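As a generic PyTorch illustration of the failure mode (this is not the actual Longformer code): an in-place write into a tensor whose saved values autograd still needs breaks the backward pass, while writing into a `clone()` does not.
```python
import torch

x = torch.randn(4, 4, requires_grad=True)
y = torch.exp(x)  # exp saves its output for the backward pass

# Broken pattern (raises in recent PyTorch versions):
#   y[:, 0] = 0.0
#   y.sum().backward()  # "... has been modified by an inplace operation"

# Cloning first keeps the saved output intact; clone() is differentiable,
# so gradients still flow back to x.
y_safe = y.clone()
y_safe[:, 0] = 0.0
y_safe.sum().backward()
print(x.grad is not None)  # True
```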
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ibeltagy @patrickvonplaten
| 09-16-2021 23:59:58 | 09-16-2021 23:59:58 | |
transformers | 13,612 | closed | ProphetNet generation inconsistent with batch size? | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
It says to ping @patrickvonplaten about ProphetNet
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration
checkpoint = 'microsoft/prophetnet-large-uncased-squad-qg'
model = ProphetNetForConditionalGeneration.from_pretrained(checkpoint)
tokenizer = ProphetNetTokenizer.from_pretrained(checkpoint)
fact1 = "Bill Gates [SEP] Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975."
fact2 = "42 [SEP] 42 is the answer to life, the universe and everything."
fact3 = "attention [SEP] A transformer is a deep learning model that adopts the mechanism of attention, differentially weighing the significance of each part of the input data."
def try_variable_batches(facts):
inputs = tokenizer(facts, padding=True, truncation=True, return_tensors="pt")
question_ids = model.generate(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], num_beams=5)
return tokenizer.batch_decode(question_ids, skip_special_tokens=True)
print(try_variable_batches([fact1, fact2, fact3]))
'''
['who founded microsoft?',
'what is the answer to life, the universe and everything?',
'what mechanism does a transformer adopt?']
'''
print(try_variable_batches([fact1]))
# ['along with paul allen, who founded microsoft?']
print(try_variable_batches([fact2]))
# ['what is the answer to life, the universe and everything?']
print(try_variable_batches([fact3]))
# ['a transformer adopts the mechanism of what?']
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I'm expecting that batching shouldn't change the results. I tried with another QG model and got consistent results with different batch sizes. If it helps at all, based on other experiments the performance of the model seems best with a batch size of 1. | 09-16-2021 23:14:08 | 09-16-2021 23:14:08 | Hey @deklanw,
Thanks a lot for the very clean issue report! This is indeed a problem and we should try to fix it.
At the moment, I'm a bit overwhelmed with issues, so would you maybe be interested in working together to solve the issue? I suspect that the problem actually lies in the forward pass of Prophetnet, so here is what we could/should do:
1. Create a test that verifies that the encoder attention mask works correctly in the forward pass:
- Pass a batch_size=1, no padded input to `model(input_ids=input_ids_single, attention_mask=None, decoder_input_ids=torch.ones((1,)))` vs `model(input_ids=input_ids_batched, attention_mask=attention_mask, decoder_input_ids=torch.ones((input_ids_batch.shape[0], 1))`
Here we should first check if the encoder outputs are exactly the same and then whether the decoder outputs are the same.
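For instance, a rough, untested sketch of such a check, reusing `model`, `tokenizer` and the `fact*` strings from the snippet above, could look like this:
```python
import torch

single = tokenizer([fact1], return_tensors="pt")
batched = tokenizer([fact1, fact2, fact3], padding=True, return_tensors="pt")

encoder = model.get_encoder()
with torch.no_grad():
    enc_single = encoder(
        input_ids=single["input_ids"], attention_mask=single["attention_mask"]
    ).last_hidden_state
    enc_batched = encoder(
        input_ids=batched["input_ids"], attention_mask=batched["attention_mask"]
    ).last_hidden_state

seq_len = single["input_ids"].shape[1]
# The first example's (unpadded) encoder states should match across the two calls.
print(torch.allclose(enc_single[0], enc_batched[0, :seq_len], atol=1e-4))
```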
Would you be interested in opening a PR where such a test is added and that would help us see where the problem could be? :-)<|||||>If you don't find any time - don't worry, I'll try to allocate some in ~1,2 weeks for this<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,611 | closed | fix some docstring in encoder-decoder models | # What does this PR do?
Fix a few places in some encoder-decoder docstrings. For example,
`:class:~transformers.EncoderDecoder is a generic ...` -> `:class:~transformers.SpeechEncoderDecoderModel is a generic ... `
in `modeling_speech_encoder_decoder.py`.
## Who can review?
@patrickvonplaten
| 09-16-2021 19:09:22 | 09-16-2021 19:09:22 | |
transformers | 13,610 | closed | How to use model.save() in tf2 when using TFBertModel | tensorflow==2.3.1
transformers==4.2.1
My model is defined as:
```
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import *
from transformers import TFAutoModel
input_ids = Input(shape=(3000), name='INPUT_input_ids', dtype=tf.int32)
input_mask = Input(shape=(3000), name='INPUT_input_mask', dtype=tf.int32)
segment_ids = Input(shape=(3000), name='INPUT_segment_ids', dtype=tf.int32)
passage_mask = Input(shape=(10), name='INPUT_passage_mask', dtype=tf.int32)
input_ids_reshape = K.reshape(input_ids,(-1, 300))
input_mask_reshape = K.reshape(input_mask,(-1, 300))
segment_ids_reshape = K.reshape(segment_ids,(-1, 300))
transformer = TFAutoModel.from_pretrained('hfl/chinese-roberta-wwm-ext', from_pt=False)
transformer_output = transformer([input_ids_reshape, input_mask_reshape, segment_ids_reshape])[0]
......
model = Model(
inputs = [input_ids, input_mask, segment_ids, passage_mask],
outputs = [start_prob, end_prob]
)
```
I try to save model in this way:
```
model.save(path)
```
but I got error
```
/lib/python3.6/site-packages/transformers/modeling_tf_utils.py in input_processing(func, config, input_ids, **kwargs)
364 output[tensor_name] = input
365 else:
--> 366 output[parameter_names[i]] = input
367 elif isinstance(input, allowed_types) or input is None:
368 output[parameter_names[i]] = input
IndexError: list index out of range
```
model.predict() and model.save_weights() are working.
How to use model.save() with huggingface-transformers? OR How to write model with huggingface-transformers? I just want to use transformers as a keras layer in my model. | 09-16-2021 16:25:55 | 09-16-2021 16:25:55 | Hi @leisurehippo , not all of our models work well with `model.save()`. If you want to get a SavedModel output, please try the `model.save_pretrained()` method with `saved_model=True`. You can see more about that method [here](https://huggingface.co/transformers/main_classes/model.html#transformers.TFPreTrainedModel.save_pretrained).
If you still encounter the same problem when using `save_pretrained`, let me know and I'll try to reproduce the issue.<|||||>> Hi @leisurehippo , not all of our models work well with `model.save()`. If you want to get a SavedModel output, please try the `model.save_pretrained()` method with `saved_model=True`. You can see more about that method [here](https://huggingface.co/transformers/main_classes/model.html#transformers.TFPreTrainedModel.save_pretrained).
>
> If you still encounter the same problem when using `save_pretrained`, let me know and I'll try to reproduce the issue.
I tried to save in this way:
```
model.save_pretrained(path,saved_model=True)
```
But I got error
```
AttributeError: 'Functional' object has no attribute 'save_pretrained'
```<|||||>> Hi @leisurehippo , not all of our models work well with `model.save()`. If you want to get a SavedModel output, please try the `model.save_pretrained()` method with `saved_model=True`. You can see more about that method [here](https://huggingface.co/transformers/main_classes/model.html#transformers.TFPreTrainedModel.save_pretrained).
>
> If you still encounter the same problem when using `save_pretrained`, let me know and I'll try to reproduce the issue.
I tried to save with the SavedModelBuilder class in this way:
```
signature = tf.compat.v1.saved_model.predict_signature_def(
inputs={t.name: t for t in model.inputs},
outputs={t.name: t for t in model.output}
)
builder = tf.compat.v1.saved_model.Builder(export_path)
builder.add_meta_graph_and_variables(
sess=tf.compat.v1.keras.backend.get_session(),
tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
signature_def_map = {'predict':signature},)
builder.save()
```
But it seems that the session is not the one in my model, and the .pb file is very small<|||||>> > Hi @leisurehippo, not all of our models work well with `model.save()`. If you want to get a SavedModel output, please try the `model.save_pretrained()` method with `saved_model=True`. You can see more about that method [here](https://huggingface.co/transformers/main_classes/model.html#transformers.TFPreTrainedModel.save_pretrained).
> > If you still encounter the same problem when using `save_pretrained`, let me know and I'll try to reproduce the issue.
>
> I tried to save with the SavedModelBuilder class in this way:
>
> ```
> signature = tf.compat.v1.saved_model.predict_signature_def(
> inputs={t.name: t for t in model.inputs},
> outputs={t.name: t for t in model.output}
> )
> builder = tf.compat.v1.saved_model.Builder(export_path)
> builder.add_meta_graph_and_variables(
> sess=tf.compat.v1.keras.backend.get_session(),
> tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
> signature_def_map = {'predict':signature},)
> builder.save()
> ```
>
> But it seems that the session is not the one in my model, and the .pb file is very small
hi, did you fix this problem? And how? Thanks.<|||||>I have the same problem. How do I fix it?
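For what it's worth, `save_pretrained` lives on the Hugging Face `TF*PreTrainedModel` object rather than on the Keras `Functional` wrapper built around it, which is why the `AttributeError` above appears. One possible (untested) split is to save the two parts separately:
```python
# Untested sketch: save the Hugging Face transformer and the surrounding Keras
# weights separately, since only the former exposes `save_pretrained`.
transformer.save_pretrained("saved/transformer")   # the TFAutoModel instance from the snippet above
model.save_weights("saved/full_model_weights.h5")  # the Keras Functional wrapper

# To reload: rebuild the same Functional architecture, then
#   transformer = TFAutoModel.from_pretrained("saved/transformer")
#   model.load_weights("saved/full_model_weights.h5")
```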
transformers | 13,609 | closed | DataCollatorForTokenClassification numpy fix | Fixes the same issue as with the Seq2seq data collator yesterday, but in the TokenClassification data collator. This time I checked and made sure there weren't any others I missed! | 09-16-2021 15:59:44 | 09-16-2021 15:59:44 | |
transformers | 13,608 | closed | Fix a pipeline test with the newly updated weights | null | 09-16-2021 15:02:31 | 09-16-2021 15:02:31 | |
transformers | 13,607 | closed | `Trainer` loads model weights twice when resuming from checkpoint | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.0
- Platform:
- Python version: 3.8
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
@sgugger
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): `T5` (`t5-3b`)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Try to do training with resumption from checkpoint
2. Observe that model weights are loaded twice
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The model weights are loaded once [here](https://github.com/huggingface/transformers/blob/v4.10.0/src/transformers/trainer.py#L260) as passed to the constructor, and then again [here](https://github.com/huggingface/transformers/blob/v4.10.0/src/transformers/trainer.py#L1074).
My proposal: I think the constructor should simply take the class of the model itself. No model weights are loaded until `self.train`. Inside `self.train`, the model weights path, optimizer path, etc are deduced using logic of whether `resume_from_checkpoint` is provided, `model_name_or_path`, etc
EDIT: I have also investigated the `model_init` param, but that seems to have the same problems, because it loads the model first [inside the constructor](https://github.com/huggingface/transformers/blob/v4.10.0/src/transformers/trainer.py#L296) and then again in `train` | 09-16-2021 14:45:02 | 09-16-2021 14:45:02 | On a similar note, why do we put the optimizer on cuda [here](https://github.com/huggingface/transformers/blob/v4.10.0/src/transformers/trainer.py#L1621)? Can we instead keep the optimizer on cpu? That would save more memory<|||||>Your proposal would be a major breaking change as the `Trainer` initialization wouldn't take the model anymore, and the model attribute of the `Trainer` would not be available after the init anymore. We don't do breaking changes between minor releases, and it might even be too big for a major release.
As for your question about the optimizer, it needs to be on CUDA like the model to be able to perform the optimizer step. I invite you to remove the line and see for yourself the mismatched-devices error message you will get.
If you want to save memory this way, you can have a look at the ZeRO-offload integration with DeepSpeed, which does offload the optimizer state and the gradients to the CPU.<|||||>Thanks for the quick reply. That's good to know; maybe we can live without the change.
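For reference, a minimal sketch of such an offload config (keys follow the DeepSpeed config schema; values here are illustrative), which can be passed to the `Trainer` through the `deepspeed` training argument:
```python
# Illustrative only: a minimal ZeRO-2 config with optimizer-state offload to CPU.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
}
# e.g. TrainingArguments(..., deepspeed=ds_config)
```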
>I invite you to remove the line and look for yourself the error message of mismatched devices you will get.
Somehow, my teammate @dblakely is able to remove the line with no error. Not sure why.
EDIT: the reason we're able to train without specifying the map location, but not with it, is that without the map location the optimizer gets distributed, while with the map location the optimizer gets sent to a single GPU.
In short, I think the map_location shouldn't be specified at all.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,606 | closed | Bugfix to implement floor division (I replaced / with //) | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Error encountered:
RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
I just replaced / with //.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-16-2021 14:39:46 | 09-16-2021 14:39:46 | Didn't we recently change all `//` to ` ... / ).long()` @nreimers ?<|||||>Hi @lorenzobalzani
It was changed in: #13573
Starting with PyTorch 1.9, // creates a warning and might be removed in future PyTorch versions.
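For reference, a version-tolerant way to spell integer division looks like this (sketch; `rounding_mode` requires PyTorch >= 1.8):
```python
import torch

a = torch.arange(10)
q1 = torch.div(a, 2, rounding_mode="floor")  # explicit floor division, PyTorch >= 1.8
q2 = (a / 2).long()                          # pattern adopted in #13573 (non-negative values)
```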
Which Pytorch version are you using that leads to this error? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I'm getting this error as well when running a summarization pipeline task<|||||>@Weilin37 what are your version numbers of Python, Pytorch and transformers? <|||||>> @Weilin37 what are your version numbers of Python, Pytorch and transformers?
Nevermind on this. I upgraded all my versions and it was fixed.<|||||>@Weilin37 Could you still tell what your previous versions were? So that I can have a look at which constellation this happens.<|||||>> @Weilin37 Could you still tell what your previous versions were? So that I can have a look at which constellation this happens.
I had transformers 4.9.0
torch 1.6.0
Python 3.8
I upgraded both transformers and torch to the latest version (as well as other packages that it depends on) and it seemed to work fine.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,605 | closed | Manual download of a pytorch_model.bin results in a zip file | I'm behind a firewall so that I cannot download models from a python script. However, I am able to download files directly from the hub, for example here https://huggingface.co/facebook/wav2vec2-base-100h.
The problem I have is that the download of the `pytorch_model.bin` file results in a `.zip` file. I don't know what to do with this zip file, and its content does not help either. I tried to simply rename it to `pytorch_model.bin`, but of course I got errors when loading this pretrained model.
So my question is how to download models and use them offline afterwards ? | 09-16-2021 14:38:13 | 09-16-2021 14:38:13 | is `git clone`ing the repo and then moving it around locally an option for you?<|||||>PS: FWIW a `pytorch_model.bin` file most of the times _is_ a zip file IIRC<|||||>`git` is not an option as it is unavailable on my machine and I am not allowed to install it.
And I suspected that the `pytorch_model.bin` were most of the time a `.zip` file, but simply renaming the downloaded file does not work here<|||||>I can maybe suggest to try to use [`huggingface_hub`](https://github.com/huggingface/huggingface_hub)'s `snapshot_download` method: https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub#snapshot_download<|||||>Thanks for the suggestion Julien.
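(For reference, a minimal `snapshot_download` call, illustrative only since exact arguments may vary across `huggingface_hub` versions, looks like this:)
```python
from huggingface_hub import snapshot_download

# Downloads the whole model repo once and returns the local folder path, which can
# then be copied to an offline machine and passed to from_pretrained().
local_dir = snapshot_download("facebook/wav2vec2-base-100h")
print(local_dir)
```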
In the meantime, I tried to download the model on another machine (that has proper access to the internet, so I was able to load the model directly from the hub) and save it locally, then I transferred it to my problematic machine. I tried to load the model from this file but faced the same issue, so it might not be related to the file itself but to something else ...
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\conda\conda\envs\nlp\lib\tarfile.py in nti(s)
186 s = nts(s, "ascii", "strict")
--> 187 n = int(s.strip() or "0", 8)
188 except ValueError:
ValueError: invalid literal for int() with base 8: 'build_te'
During handling of the above exception, another exception occurred:
InvalidHeaderError Traceback (most recent call last)
~\AppData\Local\conda\conda\envs\nlp\lib\tarfile.py in next(self)
2288 try:
-> 2289 tarinfo = self.tarinfo.fromtarfile(self)
2290 except EOFHeaderError as e:
~\AppData\Local\conda\conda\envs\nlp\lib\tarfile.py in fromtarfile(cls, tarfile)
1094 buf = tarfile.fileobj.read(BLOCKSIZE)
-> 1095 obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)
1096 obj.offset = tarfile.fileobj.tell() - BLOCKSIZE
~\AppData\Local\conda\conda\envs\nlp\lib\tarfile.py in frombuf(cls, buf, encoding, errors)
1036
-> 1037 chksum = nti(buf[148:156])
1038 if chksum not in calc_chksums(buf):
~\AppData\Local\conda\conda\envs\nlp\lib\tarfile.py in nti(s)
188 except ValueError:
--> 189 raise InvalidHeaderError("invalid header")
190 return n
InvalidHeaderError: invalid header
During handling of the above exception, another exception occurred:
ReadError Traceback (most recent call last)
~\AppData\Local\conda\conda\envs\nlp\lib\site-packages\torch\serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
555 try:
--> 556 return legacy_load(f)
557 except tarfile.TarError:
~\AppData\Local\conda\conda\envs\nlp\lib\site-packages\torch\serialization.py in legacy_load(f)
466
--> 467 with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, \
468 mkdtemp() as tmpdir:
~\AppData\Local\conda\conda\envs\nlp\lib\tarfile.py in open(cls, name, mode, fileobj, bufsize, **kwargs)
1592 raise CompressionError("unknown compression type %r" % comptype)
-> 1593 return func(name, filemode, fileobj, **kwargs)
1594
~\AppData\Local\conda\conda\envs\nlp\lib\tarfile.py in taropen(cls, name, mode, fileobj, **kwargs)
1622 raise ValueError("mode must be 'r', 'a', 'w' or 'x'")
-> 1623 return cls(name, mode, fileobj, **kwargs)
1624
~\AppData\Local\conda\conda\envs\nlp\lib\tarfile.py in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel, copybufsize)
1485 self.firstmember = None
-> 1486 self.firstmember = self.next()
1487
~\AppData\Local\conda\conda\envs\nlp\lib\tarfile.py in next(self)
2300 elif self.offset == 0:
-> 2301 raise ReadError(str(e))
2302 except EmptyHeaderError:
ReadError: invalid header
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
~\AppData\Local\conda\conda\envs\nlp\lib\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1204 try:
-> 1205 state_dict = torch.load(resolved_archive_file, map_location="cpu")
1206 except Exception:
~\AppData\Local\conda\conda\envs\nlp\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
386 try:
--> 387 return _load(f, map_location, pickle_module, **pickle_load_args)
388 finally:
~\AppData\Local\conda\conda\envs\nlp\lib\site-packages\torch\serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
559 # .zip is used for torch.jit.save and will throw an un-pickling error here
--> 560 raise RuntimeError("{} is a zip archive (did you mean to use torch.jit.load()?)".format(f.name))
561 # if not a tarfile, reset file offset and proceed
RuntimeError: C:\Users\xxxx\src\stt\model\wav2vec2-large-xlsr-53-french\pytorch_model.bin is a zip archive (did you mean to use torch.jit.load()?)
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
~\AppData\Local\Temp\1/ipykernel_12020/1473353111.py in <module>
1 # Initialize the model
2 # model = Wav2Vec2ForCTC.from_pretrained('facebook/wav2vec2-large-xlsr-53-french')
----> 3 model = Wav2Vec2ForCTC.from_pretrained(model_path)
~\AppData\Local\conda\conda\envs\nlp\lib\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1206 except Exception:
1207 raise OSError(
-> 1208 f"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' "
1209 f"at '{resolved_archive_file}'"
1210 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "
OSError: Unable to load weights from pytorch checkpoint file for 'C:\Users\xxxx\src\stt\model\wav2vec2-large-xlsr-53-french' at 'C:\Users\xxxx\src\stt\model\wav2vec2-large-xlsr-53-french\pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
Could it be due to the use of a "very" old version of pytorch ?<|||||>@gandroz could you share your environment information? You can do so by pasting the result of `transformers-cli env` in your terminal.
I think this may very well originate from the new zip-file system that `torch` uses, which was introduced in PyTorch 1.6.0. <|||||>I dug a little further and, as you mentioned, PyTorch moved from pickle serialization to zip in version 1.6.0 but still supports the legacy one. However, old PyTorch versions are of course not forward-compatible. My PyTorch version is quite old indeed, version 1.1.0, ouch!
```
- `transformers` version: 4.6.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.10
- PyTorch version (GPU?): 1.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```<|||||>You could do the conversion by loading the model in PyTorch v1.6.0, saving it using the old serialization system, and then loading it in 1.1.0<|||||>Good try, but the `gelu` activation function was added in the 1.5.0 version of pytorch so that my obsolete installed pytorch version is not able to construct the model. I could try to add it directly in code.... lol<|||||>Ah it's very possible that model only works with more recent versions of PyTorch indeed. We should do a better job of documenting which models are compatible with which version (which is on the roadmap).
Is there something blocking you from updating to a more recent version of PyTorch?<|||||>
> Is there something blocking you from updating to a more recent version of PyTorch?
Yes, security.... I must install packages from intern mirror in which pytorch for Windows in only in version 1.1
I can confirm that there is no problem with the last release.
|
transformers | 13,604 | closed | Properly use test_fetcher for examples | # What does this PR do?
This PR fixes the `test_fetcher` util script for examples. It does so by adding the example test file whenever an example file is modified, or when the example test file itself is modified (anything in examples/pytorch).
To avoid this being launched by the other test jobs, the default for the filters arguments is changed to "tests".
An example modification will now run the example test file: on the commit "Fake example modification", the test fetcher for the example script returns:
```
Master is at 421929b556aedf022a1c4a1f3b2f116b14a7b88a
Current head is at 5ba342cbf98664f5a7a98d6f02b798d78ee523f3
Branching commit: 421929b556aedf022a1c4a1f3b2f116b14a7b88a
### DIFF ###
### MODIFIED FILES ###
- examples/pytorch/language-modeling/run_clm.py
- utils/tests_fetcher.py
### IMPACTED FILES ###
- examples/pytorch/language-modeling/run_clm.py
- utils/tests_fetcher.py
### TEST TO RUN ###
- examples/pytorch/test_examples.py
```
In practice the examples job is run when any modification triggers any tests (so it's run when there is a real modification in the examples or the source code of transformers). Before this PR, the example modifications were ignored. | 09-16-2021 14:33:59 | 09-16-2021 14:33:59 | |
transformers | 13,603 | closed | Adding license file to some of the fine-tuned models | Hi,
Some of the fine-tuned models for zero-shot classification are not listed with license file. More specifically, I am interested in using the "[Sahajtomar/German_Zeroshot](https://huggingface.co/Sahajtomar/German_Zeroshot)" model, which doesn't provide any license.
So, would it be possible to add a license file with details to this model?
Thanks | 09-16-2021 14:22:51 | 09-16-2021 14:22:51 | Pinging @Sahajtomar here :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,602 | closed | how to get the 8th hidden layer in BERT ONNX | Hi, I am using a BERT ONNX model for inference. How do I get the 8th hidden layer's output from the ONNX model's outputs?
thankyou in advance | 09-16-2021 13:48:55 | 09-16-2021 13:48:55 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,601 | closed | Push to hub fails to "update ref" for GLUE example script | The `--push-to-hub` option is throwing an error while running the `run_glue.py` script for the FNet model. The checkpoints and runs are getting uploaded [here](https://huggingface.co/gchhablani/fnet-base-finetuned-mrpc/tree/main) but ideally, no errors should be observed.
## Environment info
- `transformers` version: 4.11.0.dev0
- Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Library:
- trainer: @sgugger
## Information
Model I am using (FNet): FNetForSequenceClassification (To Be Merged) #13045
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: MRPC
## To reproduce
Steps to reproduce the behavior:
1. Prepare a shell script:
```bash
#!/usr/bin/bash
python ../run_glue.py \
--model_name_or_path google/fnet-base \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir fnet-base-finetuned-mrpc \
--push_to_hub \
--hub_strategy all_checkpoints \
--logging_steps 50 \
--logging_strategy steps \
--save_steps 50 \
--save_strategy steps \
--eval_steps 50 \
--evaluation_strategy steps \
```
2. Install from the FNet branch:
```bash
pip install git+https://github.com/gchhablani/transformers.git@add_fnet
```
3. Run the shell script:
```bash
bash mrpc.sh
```
Error thrown:
```python
Traceback (most recent call last):trick-general-gpu/events.out.tf
File "/home/gunjan/anaconda3/envs/fnet_env/lib/python3.8/site-packages/huggingface_hub/repository.py", line 967, in git_push [00
raise subprocess.CalledProcessError(
subprocess.CalledProcessError: Command '['git', 'push', '--set-upstream', 'origin', 'main']' returned non-zero exit status 1.
Upload file checkpoint-550/rng_state.pth: 100%|█| 14.2k/14.2k [00
During handling of the above exception, another exception occurred:load file checkpoint-600/rng_state.pth: 100%|█| 14.2k/14.2k [00
Traceback (most recent call last):trick-general-gpu/events.out.tf
File "../run_glue.py", line 566, in <module>
main()e checkpoint-550/scheduler.pt: 100%|█| 623/623 [00:21<?
File "../run_glue.py", line 557, in main
trainer.push_to_hub(**kwargs)ler.pt: 100%|█| 623/623 [00:21<?
File "/home/gunjan/anaconda3/envs/fnet_env/lib/python3.8/site-packages/transformers/trainer.py", line 2636, in push_to_hub0:21<?
git_head_commit_url = self.repo.push_to_hub(commit_message=commit_message, blocking=blocking)
File "/home/gunjan/anaconda3/envs/fnet_env/lib/python3.8/site-packages/huggingface_hub/repository.py", line 1052, in push_to_hub
return self.git_push(
File "/home/gunjan/anaconda3/envs/fnet_env/lib/python3.8/site-packages/huggingface_hub/repository.py", line 972, in git_push
raise EnvironmentError(exc.stderr)
OSError: remote: error: cannot lock ref 'refs/heads/main': is at 7e59d1c65a4cf5010f2445a0bad6199ea4dea34f but expected 31f6a4614c202fa2af7d677bca993b0bf4c1cab9
To https://huggingface.co/gchhablani/fnet-base-finetuned-mrpc
! [remote rejected] main -> main (failed to update ref)
error: failed to push some refs to 'https://user:[email protected]/gchhablani/fnet-base-finetuned-mrpc'
```
I tried deleting the repo from my user and running the script again, still the same issue happens. Is this because of background processes still trying to push during the next run?
## Expected behavior
The checkpoints and logs get pushed to hub without any issues/failures.
**Side Note**: I believe the content after `user:` is the access token? Should it be shown while throwing the error?
CC @patrickvonplaten | 09-16-2021 13:41:06 | 09-16-2021 13:41:06 | Thanks for flagging this. The content after `user:` is indeed your access token so I edited your message to remove it (opening an issue in parallel on huggingface_hub to make sure we remove those when displaying error).
It is possible that you got an error once and have some processes still trying to push in the background. If you start from scratch, does the issue persist?
cc @LysandreJik <|||||>@sgugger I believe that is what is happening.
I am not getting an error after waiting for a while and cleaning both local and remote copies.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,600 | closed | Feature Extractor: Wav2Vec2 & Speech2Text - Allow truncation + padding=longest | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
For training in speech it's quite important that we are able to make use of the `padding="longest"` *a.k.a* dynamic batching in PyTorch on GPU, since speech inputs can vary from 2 seconds up to 30 seconds. We also want to be able to use truncation with `padding="longest"` in such a case. This PR enables using truncation (*i.e.* cutting sequences to a max length of x seconds) with the `padding="longest"` mode.
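A minimal usage sketch of the behaviour this enables (model name and values are illustrative):
```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
raw = [np.random.randn(16000 * 2), np.random.randn(16000 * 30)]  # 2s and 30s at 16 kHz

batch = extractor(
    raw,
    sampling_rate=16000,
    padding="longest",      # dynamic batching: pad to the longest sample in the batch
    truncation=True,        # ...while still cutting overly long samples
    max_length=16000 * 20,  # e.g. cap inputs at 20 seconds
    return_tensors="pt",
)
print(batch.input_values.shape)  # (2, at most 16000 * 20)
```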
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-16-2021 11:44:57 | 09-16-2021 11:44:57 | |
transformers | 13,599 | closed | Installing transformers on Docker | I am having an issue with installing transformers on Docker.
I keep getting an error
````python
If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain.
#10 64.47 ----------------------------------------
#10 64.47 ERROR: Failed building wheel for tokenizers
#10 64.47 Failed to build tokenizers
#10 64.47 ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
------
```` | 09-16-2021 11:16:31 | 09-16-2021 11:16:31 | What version of Python are you running?
What version of Transformers are you trying to install? <|||||>python:3.9.6
LATEST VERSION OF TRANSFORMER
On Thu, Sep 16, 2021 at 1:39 PM Dines Rae Selvig ***@***.***>
wrote:
> What version of Python are you running?
> What version of Transformers are you trying to install?
>
>
>
<|||||>Hi @boris-gloqal! If you have no rust support in your docker environment, you can install `transformers` without installing `tokenizers` as we have a soft dependency on `tokenizers`.
Here's what you can do:
```
pip install --no-dependencies transformers
pip install filelock huggingface-hub numpy packaging pyyaml regex requests tqdm sacremoses
```
hope that helps!<|||||>using command `docker build -t transformers:danny docker/transformers-pytorch-gpu/`
but errors
`Step 9/11 : COPY . transformers/
---> Using cache
---> 070c23b63b41
Step 10/11 : RUN cd transformers/ && python3 -m pip install --no-cache-dir .
---> Running in bc0fa62bc132
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
The command '/bin/sh -c cd transformers/ && python3 -m pip install --no-cache-dir .' returned a non-zero code: 1`
but the command `/bin/sh -c cd transformers/ && python3 -m pip install --no-cache-dir .` installs successfully
installing output:
` Stored in directory: /tmp/pip-ephem-wheel-cache-0sdcqgy7/wheels/d9/20/5d/847cef523b7823ae4e8f5182f8b00ce910d528187375600b44
Successfully built transformers
Installing collected packages: transformers
Attempting uninstall: transformers
Found existing installation: transformers 4.12.0.dev0
Uninstalling transformers-4.12.0.dev0:
Successfully uninstalled transformers-4.12.0.dev0
Successfully installed transformers-4.12.0.dev0`
Why?
Can you help me?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,598 | closed | Make assertions only if actually chunking forward | This moves the assertion on checking input dimensions into a block that will only be called if the function is actually going to do chunking forward. This is often not the case at inference time and PyTorch tracing a model with this assertion in it leads to a tracing warning.
```
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
input_tensor.shape[chunk_dim] == tensor_shape for input_tensor in input_tensors
``` | 09-16-2021 10:22:40 | 09-16-2021 10:22:40 | Asking @patrickvonplaten for review as it's on reformer<|||||>@LysandreJik This appears when tracing [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2), which doesn't seem to be a reformer 🤔 <|||||>Hey @joshdevins,
I'm happy with the PR! Can you rebase to current master - it should fix the failing tests I think. Also could you maybe replace the `assert` statement by `if ... raise ValueError`? We are trying to move away from assert statements :-) |
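A generic sketch of that pattern (placeholder names, not the exact Reformer code):
```python
# Generic sketch of the suggested `assert` -> `raise ValueError` replacement.
def check_chunk_dim(input_tensors, chunk_dim, expected_size):
    for input_tensor in input_tensors:
        if input_tensor.shape[chunk_dim] != expected_size:
            raise ValueError(
                f"All input tensors have to be of size {expected_size} along dimension "
                f"{chunk_dim}, but got {input_tensor.shape[chunk_dim]}."
            )
```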
transformers | 13,597 | closed | seq2seq trainer gives OOM error while evaluating | Hi,
I'm trying t5-base for translation with source and target lengths of 320 and 256 respectively. I'm using Seq2SeqTrainer on A100-40GB GPU. For training, it is consuming not more than 20GB of GPU memory with batch_size of 8. But for evaluation with batch_size of 4, it is giving CUDA Out Of Memory error.
Later I figured out that if I remove `compute_metrics` from the trainer, everything works fine. So could you please suggest what's wrong with my `compute_metrics` function below?
```
def compute_metrics(pred):
    sacrebleu = datasets.load_metric("metrics/sacrebleu.py")
    labels_ids = pred.label_ids
    pred_ids = pred.predictions
    labels_ids[labels_ids == -100] = tokenizer.pad_token_id
    label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
    pred_str = tokenizer.batch_decode(np.argmax(pred_ids[0], axis=-1), skip_special_tokens=True)
    label_string = []
    for l in label_str:
        label_string.append([l])
    sacrebleu_output = sacrebleu.compute(predictions=pred_str, references=label_string)
    return {
        "bleu": sacrebleu_output["score"]
    }
``` | 09-16-2021 10:18:18 | 09-16-2021 10:18:18 | Hi, this could be because during evaluation `.generate` is used with beam search which uses more memory. Also by default, all predictions are stored on the device, so if the eval dataset is large it could result in OOM.
You could use the `--eval_accumulation_steps` argument: if it's passed, the predictions are offloaded to the CPU after every `eval_accumulation_steps` steps, which reduces GPU memory usage.<|||||>Thanks a lot @patil-suraj, it worked!
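(For reference, the corresponding arguments, with illustrative values, look like this:)
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="out",
    per_device_eval_batch_size=4,
    predict_with_generate=True,
    eval_accumulation_steps=10,  # move accumulated predictions to the CPU every 10 eval steps
)
```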
transformers | 13,596 | closed | Inconsistency in class naming | @patrickvonplaten
In `modeling_utils.py` the pretrained model class is written as `PreTrainedModel`, while in `configuration_utils` the pretrained config class is named `PretrainedConfig`.
https://github.com/huggingface/transformers/blob/421929b556aedf022a1c4a1f3b2f116b14a7b88a/src/transformers/modeling_utils.py#L413
https://github.com/huggingface/transformers/blob/421929b556aedf022a1c4a1f3b2f116b14a7b88a/src/transformers/modeling_utils.py#L30
and not just that, in `transformers` there's `PretrainedFSMTModel, PretrainedBartModel, PreTrainedModel, PretrainedConfig, PreTrainedTokenizer`
and some part of the documentations
https://github.com/huggingface/transformers/blob/421929b556aedf022a1c4a1f3b2f116b14a7b88a/src/transformers/modeling_utils.py#L425-L433
Edit:
Also in the `transformers.__init__.py` there's this type of inconsistency
example:
https://github.com/huggingface/transformers/blob/421929b556aedf022a1c4a1f3b2f116b14a7b88a/src/transformers/__init__.py#L728
https://github.com/huggingface/transformers/blob/421929b556aedf022a1c4a1f3b2f116b14a7b88a/src/transformers/__init__.py#L580
| 09-16-2021 10:10:24 | 09-16-2021 10:10:24 | Hey @sadakmed,
Very good observation! We are aware of these slightly different names :-) However, it would create quite some problems with backward compatibility, so we decided to leave it as is. Hope this is ok for you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> However it would create quite some problems
@patrickvonplaten Totally agree, it's ok for me anyway!
I would only suggest to keep it in mind for the new upcoming models |
transformers | 13,595 | closed | run_summarization.py freezes | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.2
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
t5: @patrickvonplaten, @patil-suraj
## Information
I am running the official run_summarization.py on an A100, and it freezes for me after this line:
```
09/16/2021 09:43:45 - INFO - datasets.utils.download_manager - Downloading took 0.0 min
09/16/2021 09:43:48 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min
0%| | 0/5 [00:00<?, ?it/s]
```
## To reproduce
```
python run_seq2seq.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir temp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
I have no idea why this is freezing; I am running the original code with no modifications. I appreciate your help a lot. Thanks.
## Expected behavior
codes to be run | 09-16-2021 09:48:27 | 09-16-2021 09:48:27 | Not really sure about this. Looks like it's stuck at processing the dataset. Could you maybe see if you can just load the dataset? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,594 | closed | Improve tokenizer tests | # What does this PR do?
Improve tokenizer common tests in `tests/test_tokenization_common.py`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-16-2021 08:31:06 | 09-16-2021 08:31:06 | Pinging @SaulLu who has worked on this in the past<|||||>I'll take the liberty of pinging @Narsil if he can give any leads on how to unblock failed tests on pipelines.<|||||>I believe the pipeline tests are currently passing?<|||||>Hi @SaulLu could you check that all the changes look good to you when you're back?<|||||>So that we don't get lost, this PR is awaiting the outcome of the PR #13930. :slightly_smiling_face: <|||||>@SaulLu All tests passed. Thanks.
transformers | 13,593 | closed | Correct device when resizing position embeddings | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
By default, the resized position embeddings were created on the CPU. This PR changes this so that they are created on `self.device` instead.
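For reference, a minimal sketch of the pattern this describes (this is not the actual diff, and `embed_positions` is a hypothetical attribute name used only for illustration):
```python
import torch.nn as nn

def resize_position_embeddings_sketch(model, new_num_position_embeddings: int):
    # Build the new position-embedding matrix on the model's device, not on CPU.
    old = model.embed_positions
    new = nn.Embedding(new_num_position_embeddings, old.embedding_dim).to(model.device)
    # Copy over as many of the old rows as fit in the new table.
    num_to_copy = min(new_num_position_embeddings, old.num_embeddings)
    new.weight.data[:num_to_copy, :] = old.weight.data[:num_to_copy, :]
    model.embed_positions = new
```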
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-16-2021 07:30:15 | 09-16-2021 07:30:15 | |
transformers | 13,592 | closed | distributed training not starting | ## 🐛 Bug
I'm using Hugging Face's Transformers package and trying to start distributed training, but training does not start when I run the distributed script.
## To Reproduce
```
export CUDA_VISIBLE_DEVICES=0,1
python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/summarization/run_summarization.py \
--model_name_or_path facebook/bart-large \
--do_predict \
--output_dir $SAVE_MODEL_DIR/saved_models/$MODEL/$M_ID \
--per_device_train_batch_size=1 \
--per_device_eval_batch_size=8 \
--learning_rate 3e-5 \
--weight_decay 0.01 \
--adam_beta2 0.98 \
--num_train_epochs 10 \
--overwrite_output_dir \
--evaluation_strategy steps --eval_steps 2000 --save_steps 2000 --warmup_steps 7000 --logging_steps 100 \
--text_column intro \
--summary_column summary \
--train_file $DS_BASE_DIR/train.json \
--validation_file $DS_BASE_DIR/val.json \
--test_file $DS_BASE_DIR/test.json \
--predict_with_generate
```
Steps to reproduce the behavior:
1. Running the provided script according to the instructions.
2. The code just stops working without any progress.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
I also tried monitoring the NCCL behaviour using `NCCL_DEBUG=INFO` flag. This is the output after running the script:
```
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
brunello:91348:91348 [0] NCCL INFO Bootstrap : Using [0]enp8s0:192.168.10.200<0> [1]br-494482ca770a:172.18.0.1<0>
brunello:91348:91348 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
brunello:91348:91348 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
brunello:91348:91348 [0] NCCL INFO NET/Socket : Using [0]enp8s0:192.168.10.200<0> [1]br-494482ca770a:172.18.0.1<0>
brunello:91348:91348 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda9.2
brunello:91349:91349 [1] NCCL INFO Bootstrap : Using [0]enp8s0:192.168.10.200<0> [1]br-494482ca770a:172.18.0.1<0>
brunello:91349:91349 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
brunello:91349:91349 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
brunello:91349:91349 [1] NCCL INFO NET/Socket : Using [0]enp8s0:192.168.10.200<0> [1]br-494482ca770a:172.18.0.1<0>
brunello:91349:91349 [1] NCCL INFO Using network Socket
brunello:91348:91394 [0] NCCL INFO Channel 00/02 : 0 1
brunello:91349:91395 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
brunello:91348:91394 [0] NCCL INFO Channel 01/02 : 0 1
brunello:91349:91395 [1] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1
brunello:91348:91394 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
brunello:91348:91394 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
brunello:91348:91394 [0] NCCL INFO Setting affinity for GPU 0 to ffffff
brunello:91349:91395 [1] NCCL INFO Channel 00 : 1[42000] -> 0[a000] via P2P/IPC
brunello:91348:91394 [0] NCCL INFO Channel 00 : 0[a000] -> 1[42000] via P2P/IPC
brunello:91348:91394 [0] NCCL INFO Channel 01 : 0[a000] -> 1[42000] via P2P/IPC
brunello:91349:91395 [1] NCCL INFO Channel 01 : 1[42000] -> 0[a000] via P2P/IPC
brunello:91348:91394 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
brunello:91349:91395 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
brunello:91348:91394 [0] NCCL INFO comm 0x7f1428001010 rank 0 nranks 2 cudaDev 0 busId a000 - Init COMPLETE
brunello:91349:91395 [1] NCCL INFO comm 0x7f8420001010 rank 1 nranks 2 cudaDev 1 busId 42000 - Init COMPLETE
brunello:91348:91348 [0] NCCL INFO Launch mode Parallel
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
```
Collecting environment information...
PyTorch version: 1.7.1
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.1 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.12.4
Libc version: glibc-2.7
Python version: 3.8.8
Python platform: Linux-4.15.0-43-generic-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: Yes
CUDA runtime version: 10.1
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 440.33.01
cuDNN version: /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7
Versions of relevant libraries:
[pip] numpy==1.16.5
[conda] blas 1.0 mkl
[conda] cudatoolkit 9.2 0
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py38h27cfd23_1
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.1 py38ha9443f7_2
[conda] mypy_extensions 0.4.3 py38_0
[conda] numpy 1.20.1 py38h93e21f0_0
[conda] numpy-base 1.20.1 py38h7d8b39e_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] pytorch 1.7.1 py3.8_cuda9.2.148_cudnn7.6.3_0 pytorch
[conda] torchaudio 0.7.2 py38 pytorch
[conda] torchvision 0.8.2 py38_cu92 pytorch
```
- PyTorch Version (e.g., 1.0): 1.7.1
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): conda
- Python version: 3.8.8
- CUDA/cuDNN version: 10.2
- Any other relevant information: I tried PyTorch 1.7 with CUDA versions 9.2, 10.1, and 10.2 but got the same hanging behaviour I mentioned.
cc blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
| 09-16-2021 04:39:23 | 09-16-2021 04:39:23 | I could solve the problem by switching to PyTorch version 1.6 with CUDA 10.2. I guess that might be because of the NCCL version incompatibility in my OS environment. So closing this issue as it's been solved. |
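For anyone debugging a similar hang, below is a hypothetical NCCL sanity check, independent of transformers, that can help confirm whether the stall is in torch.distributed/NCCL itself rather than in the training script. Save it as `nccl_check.py` and launch it with `python -m torch.distributed.launch --nproc_per_node=2 nccl_check.py`.
```python
import argparse
import os

import torch
import torch.distributed as dist


def main():
    # torch.distributed.launch passes --local_rank; newer launchers set LOCAL_RANK instead.
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int, default=int(os.environ.get("LOCAL_RANK", 0)))
    args = parser.parse_args()

    torch.cuda.set_device(args.local_rank)
    dist.init_process_group(backend="nccl")
    tensor = torch.ones(1, device=f"cuda:{args.local_rank}")
    dist.all_reduce(tensor)  # a broken NCCL setup typically hangs right here
    print(f"rank {dist.get_rank()}: all_reduce -> {tensor.item()}")


if __name__ == "__main__":
    main()
```
If this script also hangs at `all_reduce`, the problem is in the PyTorch/NCCL/driver combination rather than in transformers.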
transformers | 13,591 | closed | ImportError: cannot import name 'auto_class_factory' | I installed layoutlmft from https://github.com/microsoft/unilm/tree/master/layoutlmft
and it imports `auto_class_factory` in the source code of `__init__.py`:
```
from transformers.models.auto.modeling_auto import auto_class_factory
.....xxx...
AutoModelForTokenClassification = auto_class_factory(
"AutoModelForTokenClassification", MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, head_doc="token classification"
)
AutoModelForRelationExtraction = auto_class_factory(
"AutoModelForRelationExtraction", MODEL_FOR_RELATION_EXTRACTION_MAPPING, head_doc="relation extraction"
)
```
After I upgraded transformers from v4.5.1 to v4.10.0,
when I run the following code:
`from transformers.models.auto.modeling_auto import auto_class_factory`
I got the error below:
```
ImportError: cannot import name 'auto_class_factory' from 'transformers.models.auto.modeling_auto'
(xxxxx/envs/huggingface/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py)
```
I couldn't find `auto_class_factory` in the source code or documentation of v4.10.0.
How can I fix this problem? Thanks. | 09-16-2021 04:02:37 | 09-16-2021 04:02:37 | I fixed it by changing `auto_class_factory` to the new function `auto_class_update` and slightly adjusting the relevant code segments in `__init__.py`:
```
try:
from transformers.models.auto.modeling_auto import auto_class_factory
except:
from transformers.models.auto.modeling_auto import _BaseAutoModelClass, auto_class_update
```
```
try:
AutoModelForTokenClassification = auto_class_factory(
"AutoModelForTokenClassification", MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, head_doc="token classification")
except:
cls = types.new_class("AutoModelForTokenClassification", (_BaseAutoModelClass,))
cls._model_mapping = MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING
cls.__name__ = "AutoModelForTokenClassification"
AutoModelForTokenClassification = auto_class_update(cls, head_doc="token classification")
```<|||||>> I fixed it by changing `auto_class_factory` to the new function `auto_class_update` and slightly adjusting the relevant code segments in `__init__.py`:
>
> ```
> try:
> from transformers.models.auto.modeling_auto import auto_class_factory
> except:
> from transformers.models.auto.modeling_auto import _BaseAutoModelClass, auto_class_update
> ```
>
> ```
> try:
> AutoModelForTokenClassification = auto_class_factory(
> "AutoModelForTokenClassification", MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, head_doc="token classification")
> except:
> cls = types.new_class("AutoModelForTokenClassification", (_BaseAutoModelClass,))
> cls._model_mapping = MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING
> cls.__name__ = "AutoModelForTokenClassification"
>
> AutoModelForTokenClassification = auto_class_update(cls, head_doc="token classification")
> ```
Thanks for your reply!
It seems that the new version (v4.10.0) of transformers doesn't support AutoModelForRelationExtraction.
It didn't work when I added the code below:
```
cls = types.new_class("AutoModelForRelationExtraction", (_BaseAutoModelClass,))
cls._model_mapping = MODEL_FOR_RELATION_EXTRACTION_MAPPING
cls.__name__ = "AutoModelForRelationExtraction"
AutoModelForRelationExtraction = auto_class_update(cls, head_doc="relation extraction")
```
The error is :
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "xxxx/unilm/layoutlmft/layoutlmft/__init__.py", line 66, in <module>
AutoModelForRelationExtraction = auto_class_update(cls, head_doc="relation extraction")
File "xxxx/envs/huggingface/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 420, in auto_class_update
from_config = replace_list_option_in_docstrings(model_mapping._model_mapping, use_model_types=False)(from_config)
AttributeError: 'collections.OrderedDict' object has no attribute '_model_mapping'
```<|||||>You will need to do the following instead:
```
class AutoModelForRelationExtraction(_BaseAutoModelClass):
_model_mapping = MODEL_FOR_RELATION_EXTRACTION_MAPPING
AutoModelForRelationExtraction = auto_class_update(AutoModelForRelationExtraction, head_doc="relation extraction")
```<|||||>> You will need to do the following instead:
>
> ```
> class AutoModelForRelationExtraction(_BaseAutoModelClass):
> _model_mapping = MODEL_FOR_RELATION_EXTRACTION_MAPPING
>
>
> AutoModelForRelationExtraction = auto_class_update(AutoModelForRelationExtraction, head_doc="relation extraction")
> ```
Hi @sgugger, I did this instead, but it did not fix the problem; the error still exists.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||> @sz-lcw,
This code is still broken in version 4.12.5 so here is the fix that sgugger meant. You will need to change the code to be this:
(Thanks to @alromb for doing most of the work and of course sgugger.)
```
....
try:
from transformers.models.auto.modeling_auto import auto_class_factory
except:
from transformers.models.auto.modeling_auto import _BaseAutoModelClass, auto_class_update
..
...
...
try:
AutoModelForTokenClassification = auto_class_factory(
"AutoModelForTokenClassification", MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, head_doc="token classification")
except:
cls = types.new_class("AutoModelForTokenClassification", (_BaseAutoModelClass,))
cls._model_mapping = MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING
cls.__name__ = "AutoModelForTokenClassification"
AutoModelForTokenClassification = auto_class_update(cls, head_doc="token classification")
try:
AutoModelForRelationExtraction = auto_class_update(
"AutoModelForRelationExtraction", MODEL_FOR_RELATION_EXTRACTION_MAPPING, head_doc="relation extraction")
except:
cls = types.new_class("AutoModelForRelationExtraction", (_BaseAutoModelClass,))
cls._model_mapping = MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING
cls.__name__ = "AutoModelForRelationExtraction"
AutoModelForRelationExtraction = auto_class_update(cls, head_doc="relation extraction")
```
<|||||>> try:
> AutoModelForTokenClassification = auto_class_factory(
> "AutoModelForTokenClassification", MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, head_doc="token classification")
> except:
> cls = types.new_class("AutoModelForTokenClassification", (_BaseAutoModelClass,))
> cls._model_mapping = MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING
> cls.__name__ = "AutoModelForTokenClassification"
>
> AutoModelForTokenClassification = auto_class_update(cls, head_doc="token classification")
>
> try:
> AutoModelForRelationExtraction = auto_class_update(
> "AutoModelForRelationExtraction", MODEL_FOR_RELATION_EXTRACTION_MAPPING, head_doc="relation extraction")
> except:
> cls = types.new_class("AutoModelForRelationExtraction", (_BaseAutoModelClass,))
> cls._model_mapping = MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING
> cls.__name__ = "AutoModelForRelationExtraction"
>
> AutoModelForRelationExtraction = auto_class_update(cls, head_doc="relation extraction")
thank you so much. this fixed the problem.<|||||>try:
AutoModelForRelationExtraction = auto_class_update(
"AutoModelForRelationExtraction", MODEL_FOR_RELATION_EXTRACTION_MAPPING, head_doc="relation extraction")
except:
cls = types.new_class("AutoModelForRelationExtraction", (_BaseAutoModelClass,))
cls._model_mapping = **MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING**
    cls.__name__ = "AutoModelForRelationExtraction"
In the relation-extraction fallback you also used MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING; that's the reason it appears to work.
It has to be mapped to MODEL_FOR_RELATION_EXTRACTION_MAPPING instead.
Please type this
try:
AutoModelForRelationExtraction = auto_class_update(
"AutoModelForRelationExtraction", MODEL_FOR_RELATION_EXTRACTION_MAPPING, head_doc="relation extraction")
except:
cls = types.new_class("AutoModelForRelationExtraction", (_BaseAutoModelClass,))
cls._model_mapping = MODEL_FOR_RELATION_EXTRACTION_MAPPING
cls.__name__ = "AutoModelForRelationExtraction"
AutoModelForRelationExtraction = auto_class_update(cls, head_doc="relation extraction")
check error is still there.
<|||||>Try the following code and see whether it works or not
```python
try:
AutoModelForRelationExtraction = auto_class_update(
"AutoModelForRelationExtraction",
MODEL_FOR_RELATION_EXTRACTION_MAPPING,
head_doc="relation extraction",
)
except:
class StupidProxy:
_model_mapping = MODEL_FOR_RELATION_EXTRACTION_MAPPING
cls = types.new_class("AutoModelForRelationExtraction", (_BaseAutoModelClass,))
cls._model_mapping = StupidProxy
cls.__name__ = "AutoModelForRelationExtraction"
AutoModelForRelationExtraction = auto_class_update(
cls, head_doc="relation extraction"
)
```<|||||>> Try the following code and see whether it works or not
>
> ```python
> try:
> AutoModelForRelationExtraction = auto_class_update(
> "AutoModelForRelationExtraction",
> MODEL_FOR_RELATION_EXTRACTION_MAPPING,
> head_doc="relation extraction",
> )
> except:
>
> class StupidProxy:
> _model_mapping = MODEL_FOR_RELATION_EXTRACTION_MAPPING
>
> cls = types.new_class("AutoModelForRelationExtraction", (_BaseAutoModelClass,))
> cls._model_mapping = StupidProxy
> cls.__name__ = "AutoModelForRelationExtraction"
>
> AutoModelForRelationExtraction = auto_class_update(
> cls, head_doc="relation extraction"
> )
> ```
lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 444, in from_pretrained
elif type(config) in cls._model_mapping.keys():
AttributeError: type object 'StupidProxy' has no attribute 'keys'
this code also occur error
<|||||>> > Try the following code and see whether it works or not
> > ```python
> > try:
> > AutoModelForRelationExtraction = auto_class_update(
> > "AutoModelForRelationExtraction",
> > MODEL_FOR_RELATION_EXTRACTION_MAPPING,
> > head_doc="relation extraction",
> > )
> > except:
> >
> > class StupidProxy:
> > _model_mapping = MODEL_FOR_RELATION_EXTRACTION_MAPPING
> >
> > cls = types.new_class("AutoModelForRelationExtraction", (_BaseAutoModelClass,))
> > cls._model_mapping = StupidProxy
> > cls.__name__ = "AutoModelForRelationExtraction"
> >
> > AutoModelForRelationExtraction = auto_class_update(
> > cls, head_doc="relation extraction"
> > )
> > ```
>
> lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 444, in from_pretrained elif type(config) in cls._model_mapping.keys(): AttributeError: type object 'StupidProxy' has no attribute 'keys'
>
> this code also occur error
Finally I fixed it by modifying the source code.
see the `auto_class_update` function in transformes/models/auto/auto_factory.py https://github.com/huggingface/transformers/blob/d53b8ad78077bbcebf6727fce0280fea26e29c7f/src/transformers/models/auto/auto_factory.py#L606
change
```python
from_config = replace_list_option_in_docstrings(model_mapping._model_mapping, use_model_types=False)(from_config)
```
to
```python
from_config = replace_list_option_in_docstrings(model_mapping, use_model_types=False)(from_config)
```
|
transformers | 13,590 | closed | Unrecognized configuration class <class 'transformers.models.distilbert.configuration_distilbert.DistilBertConfig'> for this kind of AutoModel: AutoModelForSeq2SeqLM. | I'm trying to use translate notebook with model_checkpoint = "distilbert-base-uncased" but it gives me below error:

any idea what is wrong?
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: distilbert
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 09-16-2021 01:41:31 | 09-16-2021 01:41:31 | Hello! DistilBERT is not a seq2seq model, hence why there's no seq2seq architecture for that model! |
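For reference, a small sketch of that distinction (the model names below are just common public checkpoints, not taken from this issue): seq2seq heads need an encoder-decoder checkpoint, while DistilBERT checkpoints pair with encoder-only heads.
```python
from transformers import AutoModelForSeq2SeqLM, AutoModelForSequenceClassification

# Works: t5-small is an encoder-decoder (seq2seq) checkpoint suitable for translation/summarization heads.
seq2seq_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# DistilBERT is encoder-only, so it goes with encoder-style heads instead.
classifier = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
```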
transformers | 13,589 | closed | [ci] nightly: add deepspeed master | We recently saw bugs introduced in DeepSpeed master that were caught only on their new release, which creates a gap in time where the DeepSpeed integration doesn't work. To avoid that, let's also test DeepSpeed master with the PyTorch nightly, as they're both pushing the bleeding edge.
Additionally, this PR fixes the nightly pip install line that was misplaced originally and moves it first, before any other installs. Otherwise we end up first installing the current PyTorch via dependencies of sub-packages, and then the nightly again.
@LysandreJik | 09-16-2021 00:10:51 | 09-16-2021 00:10:51 | |
transformers | 13,588 | closed | Add system-wide requirements to 'transformers-cli env' documentation | # What does this PR do?
PIL and soundfile have system-wide dependencies that must be installed in order to run the command `transformers-cli env`.
This PR adds notes to the documentation about how to solve such system-wide dependencies below:
* Provide command to fix when sndfile library is not found
* Provide command to fix when the required GLIBCXX version is not found or installed
Fixes #13270
## Before submitting
- [ X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
It was discussed here: https://github.com/huggingface/transformers/issues/13270
- [ X ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
I changed only the documentation file needed.
- [ ] Did you write any new necessary tests?
Not applicable to this PR, since it was a documentation-only update.
## Who can review?
@LysandreJik @sgugger | 09-15-2021 23:24:14 | 09-15-2021 23:24:14 | Thank you, @LysandreJik . I opened a new issue considering your feedback.
Should I close this PR?
Thank you!<|||||>Yes, this can be closed. Thank you, @merleyc! |
transformers | 13,587 | closed | [deepspeed] replaced deprecated init arg | deepspeed 0.5.2 deprecated `config` in `zero.Init()` in favor of `config_dict_or_path` - deepspeed 0.5.3 fixed a bug in the previous version related to that https://github.com/microsoft/DeepSpeed/pull/1373. So this PR updates the arg and ups the minimal requirement.
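For reference, a minimal sketch of the argument rename (illustrative only, not the PR diff; `ds_config.json` is a hypothetical path to an existing DeepSpeed configuration file):
```python
import deepspeed

ds_config = "ds_config.json"  # hypothetical path to the DeepSpeed config already used for training

# Deprecated since deepspeed 0.5.2:
#   deepspeed.zero.Init(config=ds_config)

# Replacement:
with deepspeed.zero.Init(config_dict_or_path=ds_config):
    pass  # build the model under ZeRO-3 partitioning here
```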
0.5.3 fixes a bug in `zero_to_fp32.py` introduced by 0.5.2
The ci failure is unrelated.
@sgugger, @LysandreJik
| 09-15-2021 23:11:39 | 09-15-2021 23:11:39 | |
transformers | 13,586 | closed | Fix make fix-copies with type annotations | # What does this PR do?
As pointed out by #13583, there is a bug in the current check_copies utils when a function has a very long signature and a type annotation. This was due to a regex only checking for `):` instead of `):` or `) -> type:`. This PR fixes that and pushes some changes due to divergences that went undetected.
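For illustration, a hedged sketch of the kind of pattern change described (this is not the actual code in the check-copies utility): the regex needs to accept both a bare `):` and an annotated `) -> ReturnType:` at the end of a function signature.
```python
import re

# Matches ")" optionally followed by "-> SomeType" and then ":".
signature_end = re.compile(r"\)\s*(->\s*[^:]+)?:")

print(bool(signature_end.search("def forward(self, x):")))                 # True
print(bool(signature_end.search("def forward(self, x) -> torch.Tensor:"))) # True
```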
Fixes #13583 | 09-15-2021 22:58:35 | 09-15-2021 22:58:35 | @patrickvonplaten Can you double-check the changes in HuBERT are good?<|||||>@sgugger I pulled this PR and found there is one edge case remaining:
https://github.com/huggingface/transformers/blob/95f933ea855bce0c18a665f7a6a3b8ae9ab11739/src/transformers/models/bert/modeling_tf_bert.py#L902
Change the above line to
```
def serving_output(self, output: TFBaseModelOutputWithPooling):
```
or
```
def serving_output(self, inputs, output: TFBaseModelOutputWithPooling) -> TFBaseModelOutputWithPooling:
```
or
```
def serving_output(self, output2: TFBaseModelOutputWithPooling) -> TFBaseModelOutputWithPooling:
```
Currently (with the work in this PR), these changes won't be copied to
https://github.com/huggingface/transformers/blob/95f933ea855bce0c18a665f7a6a3b8ae9ab11739/src/transformers/models/roberta/modeling_tf_roberta.py#L762
<|||||>Ok for HuBERT<|||||>@ydshieh This is not a edge case but the way the check copies utility has been implemented. It does not check for the class name, function name, just the code inside. This is why the introduction line is ignored in the check.<|||||>@sgugger , Thanks, I understand now. However, I personally feel this is a bit strange. In my work on encoder-decoder, I need to add cross-attention and cache mechanism to some models, and these involve changing some return value types.
Like `TFBaseModelOutput` to `TFBaseModelOutputWithPastAndCrossAttentions`.
And the way `check copies` is implemented won't copy this kind of changes (single-line function name & signature) as shown in the provided example.
Moreover, if the function name & signature have multiple lines, the added or removed parameters will be copied (suppose they are not in the same line as the function name). This is somehow inconsistent with the single-line case above.
I will let Hugging Face to decide if it is necessary to address this :) |
transformers | 13,585 | closed | [Tests] Disable flaky s2t test | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Flaky test is completely disabled for now. Should be solved by https://github.com/huggingface/transformers/issues/13539
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-15-2021 21:46:20 | 09-15-2021 21:46:20 | |
transformers | 13,584 | closed | Option to serialize tokenizers in memory instead of to a directory | # 🚀 Feature request
Hello!
It would be useful to serialize tokenizers to bytes in memory instead of having to write to a directory. [`PreTrainedModel.save_pretrained`](https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.save_pretrained) only takes in a `save_directory`, where ideally it would also accept a [`BytesIO`](https://docs.python.org/3/library/io.html#io.BytesIO) or similar to serialize in memory (or there would be another method to call for this).
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
We're trying to integrate `transformers` with some existing software that handles signing serialized models for security. We're able to save to a directory on the local filesystem and then read the bytes back into memory, but it would be nice to avoid that and serialize directly to an in-memory BytesIO object. Also, this would expose a more flexible API that would make it easier for others using this great package to export their models.
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
Unfortunately, I'm not familiar with the internals of the package and just wanted to see the feasibility of this. I'm happy to help however I can, especially if the change wasn't too involved. Thank you! | 09-15-2021 20:22:06 | 09-15-2021 20:22:06 | Hi @justinaustin, how about pickling it?
The following code works for me:
```
import pickle
import torch
import transformers
from transformers import BertTokenizer, BertModel
print('PyTorch version:', torch.__version__, ' transformers version:', transformers.__version__)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", additional_special_tokens=["[SP]"])
ids_1 = tokenizer("Hello World[SP]!")
pickled_tokenizer = pickle.dumps(tokenizer)
tokenizer = pickle.loads(pickled_tokenizer)
ids_2 = tokenizer("Hello World[SP]!")
print(ids_1 == ids_2) # Output: True
```<|||||>Thank you for the suggestion @qqaatw ! Pickling is a great short-term solution for us. We hesitate to use pickle long-term because of brittleness between different Python versions and potential security concerns when deserializing.
It's just unfortunate that there's already a great serialization framework within transformers that takes care of serializing to well-defined JSON and text files, but you're unable to use it if you want to avoid writing intermediate files to disk.
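For reference, a hedged sketch of that workaround: `save_pretrained()` still writes files, but they can be round-tripped through a temporary directory and kept in memory as a dict of bytes afterwards (the helper names here are hypothetical).
```python
import os
import tempfile

from transformers import AutoTokenizer


def tokenizer_to_blobs(tokenizer) -> dict:
    # Serialize to a throwaway directory, then slurp every file into memory.
    with tempfile.TemporaryDirectory() as tmp:
        tokenizer.save_pretrained(tmp)
        blobs = {}
        for name in os.listdir(tmp):
            with open(os.path.join(tmp, name), "rb") as f:
                blobs[name] = f.read()
        return blobs


def tokenizer_from_blobs(blobs: dict):
    # Write the bytes back to a temporary directory and load from there.
    with tempfile.TemporaryDirectory() as tmp:
        for name, data in blobs.items():
            with open(os.path.join(tmp, name), "wb") as f:
                f.write(data)
        return AutoTokenizer.from_pretrained(tmp)


blobs = tokenizer_to_blobs(AutoTokenizer.from_pretrained("bert-base-uncased"))
restored = tokenizer_from_blobs(blobs)
print(restored("Hello World")["input_ids"])
```
It still touches disk (in a temporary directory), so it is only a stopgap compared to a true in-memory `BytesIO` API.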
<|||||>I understand your concern. However, I think pickling is currently the only way to serialize the entire tokenizer as a bytes object, and this approach is tested in the unit tests of `transformers`. Additionally, you may find that the loading and saving utilities of PyTorch also leverage the `pickle` package for serialization, so I believe pickling remains reasonably reliable for these general use cases.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,583 | closed | fix-copies doesn't work well in some cases | ## Environment info
- `transformers` version: 4.11.0.dev0
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@sgugger @patrickvonplaten @LysandreJik
## Information
When I worked on `TFEncoderDecoderModel`, I found that in some cases `fix-copies` doesn't work well. See below.
## To reproduce
Steps to reproduce the behavior:
Case 1:
a. Put `inputs2 = inputs` after this line in `modeling_tf_bert.py` (just a dummy change to show the issues).
https://github.com/huggingface/transformers/blob/95f933ea855bce0c18a665f7a6a3b8ae9ab11739/src/transformers/models/bert/modeling_tf_bert.py#L628
b. run `make fix-copies`, you will find nothing is changed. However, `TFRoBertaMainLayer.call` indicates it is copied from `TFBertMainLayer.call`, as shown
https://github.com/huggingface/transformers/blob/95f933ea855bce0c18a665f7a6a3b8ae9ab11739/src/transformers/models/roberta/modeling_tf_roberta.py#L477
Therefore, the (dummy) change on `TFBertMainLayer.call` (1a) is not applied to `TFRoBertaMainLayer.call`.
Case 2:
c. If we continue, and also change
https://github.com/huggingface/transformers/blob/95f933ea855bce0c18a665f7a6a3b8ae9ab11739/src/transformers/models/bert/modeling_tf_bert.py#L613
to
```
):
```
and run `make fix-copies`, `TFRoBertaMainLayer.call` will be modified, but the result is very strange. See the attached file.
[results-2c.txt](https://github.com/huggingface/transformers/files/7172805/results-2c.txt)
Case 3:
d. Revert the changes in `TFRoBertaMainLayer.call` (made by `fix-copies`), and keep the changes on `TFBertMainLayer.call` (1a & 2c). Now change
https://github.com/huggingface/transformers/blob/95f933ea855bce0c18a665f7a6a3b8ae9ab11739/src/transformers/models/roberta/modeling_tf_roberta.py#L491
to
```
):
```
(so it becomes the same as L613 in `modeling_tf_bert.py`).
Run `make fix-copies`, you will see the change in `1a` is correctly copied to `TFRoBertaMainLayer.call`
## Expected behavior
It seems that type hints like `) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]:` don't work well with `fix-copies`.
Furthermore, single-line and multi-line signatures give different issues. For example, changes on
https://github.com/huggingface/transformers/blob/95f933ea855bce0c18a665f7a6a3b8ae9ab11739/src/transformers/models/bert/modeling_tf_bert.py#L902
I expect the changes on `modeling_tf_bert.py` will be correctly copied to `modeling_tf_roberta.py`. | 09-15-2021 19:46:24 | 09-15-2021 19:46:24 | |
transformers | 13,582 | closed | Fix DataCollatorForSeq2Seq when labels are supplied as Numpy array instead of list | Found a bug in my data collator code - the Seq2Seq collator doesn't work unless labels are supplied as a list, and it fails if they're passed as an array. | 09-15-2021 18:03:02 | 09-15-2021 18:03:02 | |
transformers | 13,581 | closed | gpt-j input shape after finetuning | Hello after finetuning the EleutherAI/gpt-j-6B model and loading one of the checkpoints I'm getting the following error:
size mismatch for transformer.wte.weight: copying a param with shape torch.Size([50257, 4096]) from checkpoint, the shape in current model is torch.Size([50400, 4096])
Based on the GPT-J description there are some extra tokens beyond what the base GPT2 tokenizer has. Running the normal EleutherAI/gpt-j-6B model and tokenizer works fine, so perhaps the config that's being saved with the checkpoint is not accounting for those extra tokens? I pulled master this morning so I believe this is still the case with the current codebase. Training script is:
```bash
deepspeed run_clm.py \
--model_name_or_path EleutherAI/gpt-j-6B \
--train_file $TRAIN_FILE \
--validation_file $VAL_FILE \
--gradient_accumulation_steps 8 \
--num_train_epochs 2.0 \
--evaluation_strategy epoch \
--save_steps XXX \
--logging_steps XXX \
--do_train \
--do_eval \
--save_total_limit 3 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--deepspeed "deepspeed_zero3_config.json" \
--overwrite_output_dir \
--output_dir gpt-j_model
```
test script breaks at model loading:
```python
from transformers import GPTJForCausalLM, AutoTokenizer
import torch
from pytictoc import TicToc
t = TicToc()
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = GPTJForCausalLM.from_pretrained("gpt-j_end2end/checkpoint-XXX", torch_dtype=torch.float16, max_length=2048).to(device)
print("model loaded")
...
```
Any help would be appreciated. Thanks!
| 09-15-2021 17:00:49 | 09-15-2021 17:00:49 | Hi @calderma
Thank you for opening the issue. The actual vocab size is 50257 the remaining are extra tokens.
The issue is that the vocab size is `50400` in the `config` file, but the tokenizer size is 50257, and the `run_clm.py` script, resizes token embeddings here
https://github.com/huggingface/transformers/blob/421929b556aedf022a1c4a1f3b2f116b14a7b88a/examples/pytorch/language-modeling/run_clm.py#L358
which reduces the model's embedding size to 50257, hence the shape mismatch.
@LysandreJik should we remove the extra tokens in the embedding and update the model on the hub?<|||||>Ok is there a way around this issue by manually changing the config.json or something or should I just wait until this gets updated on the hub?<|||||>Yes, if you have already fine-tuned a model, then you should change the `vocab_size` in `config.json` to 50257.
The fix is on the way #13617
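For reference, a hedged sketch of the manual workaround described above (the checkpoint directory name is an assumption): patch `vocab_size` in the fine-tuned checkpoint's `config.json` so it matches the resized embedding matrix.
```python
import json

config_path = "gpt-j_model/config.json"  # hypothetical fine-tuned checkpoint directory
with open(config_path) as f:
    config = json.load(f)
config["vocab_size"] = 50257  # tokenizer size; the embeddings were resized to this during training
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```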
<|||||>@patil-suraj but in fine-tuned model config.json file vocab_size is already 50257 :) <|||||>yes I noticed that as well. I got it to run by doing:
```python
from transformers import GPTNeoForCausalLM, AutoTokenizer, GPT2Tokenizer, GPTJForCausalLM
import torch
from pytictoc import TicToc
t = TicToc()
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = GPTJForCausalLM.from_pretrained("gpt-j_model", ignore_mismatched_sizes=True, torch_dtype=torch.float16).to(device)
print("model loaded")
tokenizer = AutoTokenizer.from_pretrained("gpt-j_model", truncation=True)
model.resize_token_embeddings(len(tokenizer))
```
but then it outputs nonsense, so I don't think the tokenization is being handled correctly when I do that. I also tried passing the base model as the tokenizer and the GPT-2 tokenizer, but they all output nonsense.
If that is the case, then the model should load without any issues, as the shapes would match.<|||||>> yes I noticed that as well. I got it to run by doing:
>
> from transformers import GPTNeoForCausalLM, AutoTokenizer, GPT2Tokenizer, GPTJForCausalLM
> import torch
> from pytictoc import TicToc
> t = TicToc()
>
> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>
> model = GPTJForCausalLM.from_pretrained("gpt-j_model",ignore_mismatched_sizes=True,torch_dtype=torch.float16).to(device)
>
> print("model loaded")
>
> tokenizer = AutoTokenizer.from_pretrained("gpt-j_model",truncation=True)
> model.resize_token_embeddings(len(tokenizer))
>
> but then it output nonesense so I don't think the tokenization is being handled correctly when I do that. I also tried passing the base model as the tokenizer and the gpt2 tokenizer but they all output nosense.
This is because of the `ignore_mismatched_sizes` argument: when it is `True`, the weights from the checkpoint that do not have the same shape as the ones inside the model are ignored and left randomly initialized. If you have already fine-tuned a model, then the simplest solution, for now, would be to change the `vocab_size` in `config.json` manually.<|||||>@calderma How exactly are you finetuning? Can you share a complete example somehow?
<|||||>@calderma Can you share what machine spec are you using for finetuning? It would be really helpful.<|||||>@working12 the finetuning I'm doing is just what I pasted in the issue. run_clm.py with the arguments I listed. For machine specs if you are asking about what kind of GPUs I'm using etc. I'm using a machine with 8x 32g GPUs. In order to fit it and train I had to use deepspeed either zero2 or zero3 and then could only do batch size 1.<|||||>@patil-suraj what's the status of this issue? Does your fix work for fine-tune command:
```bash
deepspeed --master_port 29603 --include localhost:3 run_clm.py \
--deepspeed zero3.json \
--model_revision float32 \
--model_name_or_path EleutherAI/gpt-j-6B \
--do_train --train_file train.txt \
--do_eval --validation_file test.txt \
--evaluation_strategy no \
--logging_strategy steps \
--logging_steps $logging_steps \
--save_strategy steps \
--save_steps $save_steps \
--overwrite_output_dir \
--output_dir output.$EXP \
--per_device_train_batch_size $BS \
--per_device_eval_batch_size $BS \
--gradient_accumulation_steps $AS \
--num_train_epochs $N > log.$EXP
```
I am blocked by the same issue when NOT using --fp16. I tried two fine-tuning runs with the same parameters:
(1) one with "--model_revision float16 --fp16"
(2) the other with the command above, which uses "--model_revision float32" but no --fp16.
The output model from (1) can be loaded fine, but (2) triggered the size mismatch error. How do I fix the error? --fp16 fine-tuning has so many OVERFLOW errors that I wanted to turn --fp16 off.
A bit more details.
config.json-->vocab_size: 50400 in (1) but 50257 in (2). I tried to manually change 50257 to 50400 in (2) but triggered other errors.
tokenizer.json: many <|extratoken_XXX|> in (1) but not in (2)<|||||>@patil-suraj @calderma @MantasLukauskas The issue is actually more fundamental. There seems to be a problem with the serialization of GPTJ. Even without fine-tuning, this issue is encountered if the embedding size of GPTJ is expanded. For example, if you run this code below you will also get a size mismatch error:
```
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
model.resize_token_embeddings(54001)
model.save_pretrained("model_dir")
model = GPTJForCausalLM.from_pretrained("model_dir")
RuntimeError: Error(s) in loading state_dict for GPTJForCausalLM:
size mismatch for lm_head.weight: copying a param with shape torch.Size([50400, 4096]) from checkpoint, the shape in current model is torch.Size([54001, 4096]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([50400]) from checkpoint, the shape in current model is torch.Size([54001]).
```<|||||>I fixed this one by using "--model_revision main" instead of "--model_revision float32". It looks like the previous PR fixed only "main" and "float16" but not "float32".
> @patil-suraj what's the status of this issue? Does your fix work for fine-tune command:
>
> deepspeed --master_port 29603 --include localhost:3 run_clm.py --deepspeed zero3.json --model_revision float32 --model_name_or_path EleutherAI/gpt-j-6B --do_train --train_file train.txt --do_eval --validation_file test.txt --evaluation_strategy no --logging_strategy steps --logging_steps $logging_steps --save_strategy steps --save_steps $save_steps --overwrite_output_dir --output_dir output.$EXP --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --gradient_accumulation_steps $AS --num_train_epochs $N > log.$EXP
>
> I am blocked by the same issue when NOT use --fp16. I tried two fine-tune with the same parameters but
>
> (1) one with "--model_revision float16 --fp16" (2) the other with the command above, which uses "--model_revision float32" but no --fp16.
>
> The output model from (1) can be loaded fine but (2) triggered size mismatch error. How do I fix the error? --fp16 fine tuning has so many OVERFLOW errors that I wanted to turn --fp16 off.
>
> A bit more details.
>
> config.json-->vocab_size: 50400 in (1) but 50257 in (2). I tried to manually change 50257 to 50400 in (2) but triggered other errors. tokenizer.json: many <|extratoken_XXX|> in (1) but not in (2)
<|||||>@dunalduck0 Oh great I'll try that.<|||||>@dunalduck0 the `main` branch has the fp32 weights which should be used instead of the `float32` branch.
Also, there was another issue with resizing embeddings, #14190 should fix that.<|||||>Thank you @patil-suraj <|||||>This is fixed by #14190<|||||>@alexorona I've got the same problem as you. After resizing, I cannot load the original model anymore due to a size mismatch. I found this is due to resize_token_embeddings() changing config.vocab_size. If I reset config.vocab_size to 54000 before loading, it works fine.
> @patil-suraj @calderma @MantasLukauskas The issue is actually more fundamental. There seems to be a problem with the serialization of GPTJ. Even without fine-tuning, this issue is encountered if the embedding size of GPTJ is expanded. For example, if you run this code below you will also get a size mismatch error:
>
> ```
> from transformers import GPTJForCausalLM
> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
> model.resize_token_embeddings(54001)
> model.save_pretrained("model_dir")
> model = GPTJForCausalLM.from_pretrained("model_dir")
> RuntimeError: Error(s) in loading state_dict for GPTJForCausalLM:
> size mismatch for lm_head.weight: copying a param with shape torch.Size([50400, 4096]) from checkpoint, the shape in current model is torch.Size([54001, 4096]).
> size mismatch for lm_head.bias: copying a param with shape torch.Size([50400]) from checkpoint, the shape in current model is torch.Size([54001]).
> ```
|
transformers | 13,580 | closed | Trainer error when finetuning Pegasus - cannot import name 'amp' from 'apex' | I get this error when i try to finetune Pegasus.
`ImportError: cannot import name 'amp' from 'apex' (unknown location) `
Is it possible to run the Trainer without apex installed? I tried to specify fp16=False in the arguments but I still get this error.
the finetuning code is here - https://gist.github.com/jiahao87/50cec29725824da7ff6dd9314b53c4b3
Thanks
| 09-15-2021 16:12:25 | 09-15-2021 16:12:25 | I disabled it in trainer.py and now it works, but I do get memory error unless i freeze encoder.
```python
if is_apex_available():
    pass
    # from apex import amp
```
|
transformers | 13,579 | closed | Initial support for symbolic tracing with torch.fx allowing dynamic axes | # What does this PR do?
This PR enables to symbolic trace models without having to specify a fixed batch size and / or sequence length.
To specify that an axis should not have a fixed shape, the value should be set to -1:
```python
traced = symbolic_trace(
model,
input_names=["input_ids", "attention_mask", "token_type_ids"],
batch_size=-1, # means that batch size should be dynamic
sequence_length=-1 # means that the sequence length should be dynamic
)
```
Currently, only the following models support dynamic axes:
- Albert
- Bert
- DistilBert
- MobileBert
- MegatronBert
- Electra
Traced models with dynamic axes cannot be retraced out of the box, to solve this 2 functions are provided:
- `prepare_for_retracing`: takes a traced model as input and outputs a model that can be retraced and some information that is needed to get back to dynamic axes
- `restore_after_retracing`: takes a retraced model and the information returned by `prepare_for_retracing` and set the dynamic axes back
Example:
```python
prepared, attributes = prepare_for_retracing(traced)
retraced = some_retracing_func(prepared)
final_model = restore_after_retracing(retraced, attributes)
```
To make things less tedious, `retrace_graph_with` is provided: it takes a traced model, and either a Tracer or a tracing function and performs the retracing:
```python
final_model = retrace_graph_with(traced, func=some_tracing_func)
```
Being able to retrace a model is important because that is how quantization can be done for instance.
Finally, this PR also provides sanity checks that validate that symbolic tracing is available for the model provided to `symbolic_trace`, and that it can be traced with dynamic axes if the user asks for it. | 09-15-2021 16:00:32 | 09-15-2021 16:00:32 | @sgugger What do you think of the way I've changed things regarding the config classes imports? |
transformers | 13,578 | closed | Example of exporting BartModel + Beam Search. | All new files for the example of exporting BartModel and beam search to an ONNX file. This example should work with the latest version of Hugging Face transformers and requires no changes to the model code. | 09-15-2021 15:32:34 | 09-15-2021 15:32:34 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Is this still going ahead in another pull request?<|||||>> Is this still going ahead in another pull request?
I should have closed this draft several days ago, since another official one has already been merged.<|||||>Oh okay. Does that PR include beam search in the generate method?<|||||>> Oh okay. Does that PR include beam search in the generate method?
No, we didn't update the generate method.<|||||>Ah I see. Thanks for the info!
transformers | 13,577 | closed | XLMR tokenizer is fully picklable | # What does this PR do?
This addresses the issue here https://github.com/huggingface/transformers/issues/13200 to summarize:
- unpickling was dependent on what was on disk
The tokenizer is now unpickled using only the serialised proto.
This is needed if you want to write a pyspark udf which tokenizes a column, as the tokenizer needs to be pickled and sent to other nodes.
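For reference, a hedged sketch of the motivating use case (the checkpoint name is the standard public one, and the worker function is hypothetical): pickle the tokenizer once on the driver, ship the bytes to worker processes/nodes, and unpickle there without relying on files that only exist on the driver's disk.
```python
import pickle

from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
payload = pickle.dumps(tokenizer)  # this is what would be shipped to executors


def tokenize_on_worker(text: str, payload: bytes = payload):
    # Must not depend on any file path that was only valid on the driver.
    worker_tokenizer = pickle.loads(payload)
    return worker_tokenizer(text)["input_ids"]


print(tokenize_on_worker("Hello world"))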
# Who can help
@LysandreJik | 09-15-2021 11:28:22 | 09-15-2021 11:28:22 | > This looks good to me - do you think you could implement a test in `tests/test_tokenization_xlm_roberta.py`?
done |
transformers | 13,576 | closed | Custom GPT2 Model won't load after training | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.2
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
## Information
Model I am using GPT2PretrainedModel.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## The Problem
I was able to train my customly build model but I am not able to load it with the `from_pretrained()` function. BTW I don't save the model manually if that is important. The saving is done by the Huggingface-Trainer.
The Error message:
```
model = CustomGPTModel.from_pretrained("results/checkpoint-19065", config=config)
File "/home/flo/PycharmProjects/EET2/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1325, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
TypeError: __init__() missing 1 required positional argument: 'config'
```
I load the model like this:
```python
config = AutoConfig.from_pretrained("results/checkpoint-19065")
model = CustomGPTModel.from_pretrained("dbmdz/german-gpt2", config=config)
# custom = CustomGPTModel(model=model, config=config)
training_args = TrainingArguments(
output_dir='./results', # output directory
per_device_train_batch_size=1, # batch size per device during training
per_device_eval_batch_size=1, # batch size for evaluation
logging_dir='./logs/event/', # directory for storing logs
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
# model=custom, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
compute_metrics=compute_everything,
)
trainer.predict(test_dataset=test_dataset)
```
As you can tell from the commented code, I tried a lot of different approaches to no avail.
Other approaches I tried:
```
config = AutoConfig.from_pretrained("results/checkpoint-19065")
model = CustomGPTModel.from_pretrained("results/checkpoint-19065", config=config)
# or
config = AutoConfig.from_pretrained("results/checkpoint-19065")
model = CustomGPTModel.from_pretrained("results/checkpoint-19065")
```
Anyway the question is _How do I load my custom model?_
I think it is because of the way I initialize the CustomGPTModel (see below).
## The Task / More Information on what I am Doing
I am training the "dbmdz/german-gpt2" on a multilabel-classification task. For this I had to create my own model by subclassing GPT2PreTrainedModel. This is what the model looks like:
```python
class CustomGPTModel(GPT2PreTrainedModel):
def __init__(self, model, config):
super(CustomGPTModel, self).__init__(config)
self.num_labels = config.num_labels
self.init_weights()
### Architecture:
self.transformer = model
self.linear1 = nn.Linear(config.n_embd, 256)
self.score = nn.Linear(256, self.num_labels, bias=False)
self.dropout = nn.Dropout(p=0.2)
self.sig = nn.Sigmoid()
self.relu = nn.ReLU()
# Model parallel
self.model_parallel = False
self.device_map = None
def forward(self, input_ids=None, past_key_values=None, attention_mask=None,
token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None,
labels=None, use_cache=None, output_attentions=None, output_hidden_states=None,
return_dict=None, ):
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
transformer_outputs = self.transformer(
input_ids,
past_key_values=past_key_values,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = transformer_outputs[0] # call model
hdn_2 = self.linear1(hidden_states) # first linear
logits = self.score(self.dropout(self.relu(hdn_2))) # apply activation/dropout and final layer
if input_ids is not None:
batch_size, sequence_length = input_ids.shape[:2]
else:
batch_size, sequence_length = inputs_embeds.shape[:2]
assert (
self.config.pad_token_id is not None or batch_size == 1
), "Cannot handle batch sizes > 1 if no padding token is defined."
if self.config.pad_token_id is None:
sequence_lengths = -1
else:
if input_ids is not None:
sequence_lengths = torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1
pooled_logits = logits[range(batch_size), sequence_lengths]
loss = None
if labels is not None:
loss_fct = BCEWithLogitsLoss()
loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1, self.num_labels))
return (loss, pooled_logits)
else:
return logits
```
Here I initialize the model for training:
```python
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=10, # total number of training epochs
per_device_train_batch_size=1, # batch size per device during training
per_device_eval_batch_size=1, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs/event/', # directory for storing logs
logging_steps=1000,
load_best_model_at_end=True,
evaluation_strategy="epoch", # Evaluation is done (and logged) every eval_steps
save_strategy="epoch",
# logging_first_step = True,
do_eval=True,
)
trainer = Trainer(
model=custom_gpt2, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset, # evaluation dataset
compute_metrics=compute_everything,
callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```
## Expected behavior
The model should get loaded as expected.
I have been trying to fix this for two days now, so creating an issue is my last resort. Hopefully someone can explain what I am doing wrong :sweat_smile: If you need more information, please tell me!
@sgugger | 09-15-2021 11:00:57 | 09-15-2021 11:00:57 | As replied on the forum, if you want to use `from_pretrained`, you need to make your `CustomGPTModel` only take a `config` during its init, like regular `PreTrainedModel` in the transformers library.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
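A minimal sketch of the restructuring suggested in the reply above (an illustration, not code from the thread): the custom class accepts only `config`, builds the backbone itself, and can then be loaded with `from_pretrained`.
```python
from torch import nn
from transformers import GPT2Model, GPT2PreTrainedModel

class CustomGPTModel(GPT2PreTrainedModel):
    def __init__(self, config):                  # only `config`, like other PreTrainedModels
        super().__init__(config)
        self.num_labels = config.num_labels
        self.transformer = GPT2Model(config)     # backbone built from the config, not passed in
        self.linear1 = nn.Linear(config.n_embd, 256)
        self.score = nn.Linear(256, self.num_labels, bias=False)
        self.dropout = nn.Dropout(p=0.2)
        self.relu = nn.ReLU()
        self.init_weights()                      # called once all submodules exist

# Loading a Trainer checkpoint then works the usual way:
# model = CustomGPTModel.from_pretrained("results/checkpoint-19065")
```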
transformers | 13,575 | closed | Q About BART bart-large-cnn | I'm using this BART tokenizer code
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model_name = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(model_name)
```
I want to read the documentation for bart-large-cnn.
I guess this BPE tokenization (the paper says it uses the same BPE as GPT-2) is trained on CNN/DM (Hermann et al., 2015). Is that right?
Q1. Which dataset did the above tokenization use?
Q2. If there is documentation about 'bart-large-cnn, bart-large-mnli, bart-large-xsum, bart-eli5, and bart-base', could you share the link?
Thanks in advance. | 09-15-2021 10:10:12 | 09-15-2021 10:10:12 | Hi there!
Thank you for opening the issue. Please use the [forum](https://discuss.huggingface.co/) for such general questions. Issues are used to report bugs or feature requests.<|||||>So sorry, I didn't know the forum existed
transformers | 13,574 | closed | Add cpu distributed fine-tuning support for transformers Trainer API | Signed-off-by: Ding, Ke <[email protected]>
# What does this PR do?
This PR adds CPU distributed fine-tuning/training support to the transformers Trainer API, supporting both the MPI and oneCCL backends on CPU platforms.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@mfuntowicz
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-15-2021 04:34:46 | 09-15-2021 04:34:46 | > LGTM! Thanks @kding1.
>
> Should we add something to the documentation to highlight Distributed CPU feature in trainer?
Yes, we should. Where/which section would you recommend? We can do it separately.
transformers | 13,573 | closed | Add Mistral GPT-2 Stability Tweaks | Following the release of [Mistral-v1](https://github.com/stanford-crfm/mistral), we are pushing a *Draft PR* with the stability fixes we made for training GPT-2 models directly to the base GPT-2 class definition (ensuring backwards compatibility).
This is in line with the following issues:
- https://github.com/stanford-crfm/mistral/issues/86: Enabling sharing Mistral Checkpoints via HF Hub (@osanseviero)
- https://github.com/huggingface/transformers/issues/13463: Upcasting Scaled Dot-Product Attention + Layerwise Scaling for stability (@lvwerra)
Concretely we implement:
- Weight initialization from the original GPT-2 paper (by default, shouldn't affect folks unless they are training GPT-2 models from scratch)
- Layer-wise scaling in scaled dot-product attention (optional flag; necessary for running/loading Mistral GPT-2 Models)
- Scaled Dot-Product Attention Reordering (scale before dot-product) & FP32 Upcasting when training in Mixed Precision (optional flag; only necessary for training new Mistral/other GPT-2 models). A short usage sketch of these flags follows this list.
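A minimal usage sketch (the flag names follow the ones used later in this thread and should be treated as assumptions rather than the final merged API):
```python
from transformers import GPT2Config, GPT2LMHeadModel

# Enable the stability tweaks when pretraining a GPT-2 model from scratch.
config = GPT2Config.from_pretrained(
    "gpt2",
    scale_attn_by_layer_idx=True,    # layer-wise scaling in scaled dot-product attention
    reorder_and_upcast_attn=True,    # scale before the dot product + fp32 upcast in mixed precision
)
model = GPT2LMHeadModel(config)      # freshly initialized weights, ready for pretraining
```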
This is a Draft PR to aid in @lvwerra and @thomwolf's work training GPT-2 models stably; we plan on implementing tests (please let us know potential pain points), adding documentation, and will act on any other feedback you have 🙂.
CC Mistral team: @lorr1, @J38, @santhnm2
CC others at HF + GPT-2 Model reviewers: @stas00, @patrickvonplaten, @LysandreJik
Resolves #13463 | 09-15-2021 04:32:03 | 09-15-2021 04:32:03 | @lvwerra did us the courtesy of implementing many of the changes - most of the comments have been resolved!<|||||>Question for @LysandreJik @patrickvonplaten; what tests should we add for these changes, and what documentation should we add? I can take an initial pass later today.
@lvwerra - this should be good to start your GPT-2 training runs.
Separately; do we need to implement similar fixes for TF and Flax? I'm not as familiar with either of those libraries/those GPT implementations...<|||||>Thank you for working on this @siddk. After seeing the code, I agree that this can indeed be merged into GPT-2 rather than by creating a new class as I was suggesting a few weeks ago.
Regarding the documentation, I would put a comment about this in the "Tips" of GPT-2, which can be found [here](https://github.com/huggingface/transformers/blame/master/docs/source/model_doc/gpt2.rst#L32).
Regarding tests, I would aim to test added features:
> - Weight initialization from the original GPT-2 paper (by default, shouldn't affect folks unless they are training GPT-2 models from scratch)
I would check that a newly initialized GPT-2 with a specific initializer range results in a change of weights. If this is too hard to implement, I don't think that's too much of an issue to not test it.
> Layer-wise scaling in scaled dot-product attention (optional flag; necessary for running/loading Mistral GPT-2 Models)
I would check that the scaling is indeed applied; should be able to leverage the `output_hidden_states` flag for that.
Let us know if we can help with anything or if you had other tests in mind.<|||||>Thanks so much @LysandreJik - we'll implement the tests you suggest (CC @j38 @santhnm2). I noticed that we're getting this weird failed test with building documentation; any idea how to get to the bottom of it, the Sphinx error message isn't too helpful...
@lvwerra - hopefully this is enough to get started!<|||||>Hi everyone, I just pushed a first pass version of tests that just ensure passing the `scale_attn_by_layer_idx` and `reorder_and_upcast_attn` flags don't result in a crash. However, we were wondering it was worth doing more robust tests (for example actually verifying the attention weights changed as a result of upcasting), and if so, how we would go about doing this considering the new functions are pretty much black boxes from the point of view of the tests. Thanks for your help!<|||||>Thanks @stas00! @santhnm2 - can you make the suggested changes (and fix the "docs" test that @LysandreJik suggested a fix for!).
I'll add some notes to the "Tips" section, then we can make this an official PR - thanks all for the help.<|||||>Hi all - @santhnm2 has resolved the remaining tests and docs, and all checks currently pass!
Excited to get this merged!<|||||>I added a test for the new weight initializations that will check the empirical std and mean are within a reasonable range of the expected value. Please let me know if you'd like any modifications.<|||||>I just added an assert check as well that the upcasting is actually happening.<|||||>Thanks @patrickvonplaten @LysandreJik @sgugger - Merge conflicts should be addressed now!<|||||>Using a model trained with `reorder_and_upcast_attn=True` to generate text with `generate` function results in the following error:
```python
from transformers import GPT2LMHeadModel, AutoTokenizer, AutoConfig, pipeline
config = AutoConfig.from_pretrained('gpt2', reorder_and_upcast_attn=True)
model = GPT2LMHeadModel(config)
tokenizer = AutoTokenizer.from_pretrained('gpt2')
inputs = tokenizer('test', return_tensors='pt')
model.generate(**inputs)
```
```python
~/miniconda3/envs/codeparrot/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py in _upcast_and_reordered_attn(self, query, key, value, attention_mask, head_mask)
244 with autocast(enabled=False):
245 q, k = query.reshape(-1, seq_len, dk), key.transpose(-1, -2).reshape(-1, dk, seq_len)
--> 246 attn_weights = torch.baddbmm(attn_weights, q.float(), k.float(), beta=0, alpha=scale_factor)
247 attn_weights = attn_weights.reshape(bsz, num_heads, seq_len, seq_len)
248 else:
RuntimeError: Expected batch2_sizes[0] == bs && batch2_sizes[1] == contraction_size to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
The reason is that, when generating, the previous keys and values are concatenated with the current ones while the queries are not extended.
https://github.com/huggingface/transformers/blob/babb7f40fb9087ad31cd38bb12117427a614a173/src/transformers/models/gpt2/modeling_gpt2.py#L322-L325
Thus the `q` tensor has a different shape than `k` which causes `torch.baddbmm` to throw an error. How should this be handled?
One way would be to add an extra condition to the if-statement:
```python
if self.reorder_and_upcast_attn and (query.shape==key.shape):
attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask)
```
Or should the user be explicitly be warned? Another option is the repeat the query tensor along the sequence dimension:
```python
if q.shape != k.shape:
q = q.repeat(1, 1, k.shape[2], 1)
```
But I have to think a bit harder and test that this is actually equivalent for the last option.<|||||>If you’re just using a model (for inference/generation) you should just be able to turn off that flag (as long as scaling by layer idx is still True). They should be equivalent.
This does conflict with typical use of `from_pretrained` though — any suggestions @sgugger @lvwerra?<|||||>Just wanted to follow-up - is there a plan to merge this PR?<|||||>Hi @siddk, the plan is definitely to merge this PR. The issue @lvwerra mentions seems to be real, however: if the model cannot be used for anything else than training with `reorder_and_upcast_attn=True`, it should be mentioned in the documentation and should fail with an explicit error message.
Would the fix offered by @lvwerra regarding the mismatched `q` and `k` tensor sizes work for you?<|||||>Sorry @LysandreJik @lvwerra I totally missed the last response.
That issue makes sense, I think the best fix is the "if" condition @lvwerra added (only upcast/reorder if q and k are the same shape). I don't think repeating the query actually keeps the same semantics - might actually give different results.
@lvwerra would it be easier if I made that change?<|||||>I believe Leandro is off for a few days, so if you have time to make that change it would be very welcome @siddk!<|||||>@LysandreJik - actually found a better, more general fix. Verified that the above snippet from @lvwerra works as expected - should be good to go.<|||||>@LysandreJik Just added tests with the `generate` method, let me know if these look alright to you.<|||||>These looks good, thank you @santhnm2. Merging! |
transformers | 13,572 | closed | Add SigOpt HPO to transformers trainer api | # What does this PR do?
This PR adds SigOpt (https://sigopt.com/) hyper-parameter optimization support to the transformers Trainer API.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@mfuntowicz
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-15-2021 04:20:17 | 09-15-2021 04:20:17 | @LysandreJik this is because the sigopt token is not set properly in your test environment. Below are the steps:
- sign up for SigOpt HPO at https://app.sigopt.com/signup; it's free for everyone on the basic plan.
- log in to your account, get your SigOpt API token from your portal, and expose it as an environment variable with `export SIGOPT_API_TOKEN=your_sigopt_token_string`
- in your account portal, create a project named `huggingface`.
Once the training starts with sigopt HPO, you will get the real-time dashboard like this: https://app.sigopt.com/guest?guest_token=NJTRFVEPPGWTKZPLISXVWUXPIOFATREJOVAILZQBPSLOECMU
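For reference, a minimal sketch of kicking off the search once the token and project are set up (assumes a `Trainer` instance named `trainer`; the search-space entries are illustrative, and the method/backend names follow this PR):
```python
# Assumes SIGOPT_API_TOKEN is exported and a `huggingface` project exists.
def sigopt_hp_space(trial):
    # illustrative parameters/bounds, not a recommended configuration
    return [
        {"name": "learning_rate", "type": "double", "bounds": {"min": 1e-5, "max": 5e-5}},
        {"name": "num_train_epochs", "type": "int", "bounds": {"min": 1, "max": 4}},
    ]

best_run = trainer.hyperparameter_search(
    hp_space=sigopt_hp_space,
    backend="sigopt",
    n_trials=10,
    direction="minimize",
)
```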
I have run the unit test with the refactor patch i just submitted and it works:
`$ pytest tests/test_trainer.py::TrainerHyperParameterSigOptIntegrationTest::test_hyperparameter_search`
Please let me know if there is any issue on your side.<|||||>Rebased to fix a conflict on the dependency package.
transformers | 13,571 | closed | single_word option of new tokens are disabled by save_pretrained when we save and reload a tokenizer twice | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
I set up environment as follows:
```bash
conda create -n test python=3.9
conda activate test
pip install transformers
# I got transformers 4.10.2 and tokenizers 0.10.3
```
Other details:
- `transformers` version: 4.10.2
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
* tokenizer: @LysandreJik
## Information
The `single_word` option of new tokens is disabled by `save_pretrained` when we save and reload a tokenizer twice.
## To reproduce
```python
from transformers import AutoTokenizer
from tokenizers import AddedToken
# Load tokenizer and add tokens.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
new_vocab = [AddedToken("some_word", single_word=True), AddedToken("some_words", single_word=True)]
tokenizer.add_tokens(new_vocab)
def check_tokenizer():
print(tokenizer.convert_ids_to_tokens(tokenizer.encode("some_words", add_special_tokens=False)))
check_tokenizer()
# Save and reload tokenizer
tokenizer.save_pretrained("first_save")
tokenizer = AutoTokenizer.from_pretrained("./first_save")
check_tokenizer()
# Save and reload tokenizer again
tokenizer.save_pretrained("second_save")
tokenizer = AutoTokenizer.from_pretrained("./second_save")
check_tokenizer()
```
The above code outputs:
```
['some_words']
['some_words']
['some_word', 's']
```
`first_save/tokenizer.json` includes the following entry:
```
{"id":28996,"special":false,"content":"some_word","single_word":true,"lstrip":false,"rstrip":false,"normalized":true},{"id":28997,"special":false,"content":"some_words","single_word":true,"lstrip":false,"rstrip":false,"normalized":true}
```
However, in `second_save/tokenizer.json`, the above entry is changed. Note that a value for "single_word" is changed from true to false.
```
{"id":28996,"special":false,"content":"some_word","single_word":false,"lstrip":false,"rstrip":false,"normalized":true},{"id":28997,"special":false,"content":"some_words","single_word":false,"lstrip":false,"rstrip":false,"normalized":true}
```
## Expected behavior
```
['some_words']
['some_words']
['some_words']
```
| 09-15-2021 02:47:17 | 09-15-2021 02:47:17 | Ah that's interesting! Might be of interest to @SaulLu <|||||>Edited:
The problem seems to only occur with the fast tokenizer; the slow version works fine,
so the problem I mentioned below has no direct relation to this issue.
@kosuke1701 You can try using your original method but add the `use_fast=False` argument to `from_pretrained`, like so:
```
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
```
The output is also correct:
```
['some_words']
['some_words']
['some_words']
```
---
This is an issue related to #13483 and #13489. @LysandreJik @SaulLu
The `add_tokens` method simply converts the `AddedToken` objects into `str` and then stores them in `tokenizer.unique_no_split_tokens`, while the `save_pretrained` method only retrieves added tokens from `tokenizer.special_tokens_map_extended`, so these added tokens will never be saved.
Besides, even if the `save_pretrained` method also retrieved tokens from `tokenizer.unique_no_split_tokens`, attributes that `AddedToken` provides, such as `single_word`, would still be lost because the tokens have already been converted to `str` in the first step of `add_tokens`.
To OP, the following code would work:
```
import torch
import transformers
from transformers import AutoTokenizer, BertTokenizer, BertModel
from tokenizers import AddedToken
print('PyTorch version:', torch.__version__, ' transformers version:', transformers.__version__)
# Load tokenizer and add tokens.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
new_vocab = [AddedToken("some_word", single_word=True), AddedToken("some_words", single_word=True)]
tokenizer.add_special_tokens({"additional_special_tokens":new_vocab})
def check_tokenizer():
print(tokenizer.convert_ids_to_tokens(tokenizer.encode("some_words", add_special_tokens=False)))
check_tokenizer()
# Save and reload tokenizer
tokenizer.save_pretrained("first_save")
tokenizer = AutoTokenizer.from_pretrained("./first_save")
check_tokenizer()
# Save and reload tokenizer again
tokenizer.save_pretrained("second_save")
tokenizer = AutoTokenizer.from_pretrained("./second_save")
check_tokenizer()
```
Output:
```
['some_words']
['some_words']
['some_words']
```<|||||>@qqaatw Your code works for me! Thanks a lot :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It is indeed interesting!
And thank you very much for all the analysis and the fix @qqaatw! I'm putting it on my todo list to try to find something that would avoid this undesirable behavior. |
transformers | 13,570 | closed | BART | ## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
| 09-15-2021 02:38:17 | 09-15-2021 02:38:17 | |
transformers | 13,569 | closed | Log metrics during training | # 🚀 Feature request
HF Transformers supports custom metrics at evaluation, but not during training. Many metrics are trivial / fast to collect during training time, so it would be nice to collect them too (e.g. BERT accuracy).
## Motivation
Collecting these metrics during training helps answer a number of research questions including overfitting, model performance, etc. Integration at this level can also pave the way toward collecting sequence-level information that can be logged using tools like W&B Tables.
## Your contribution
I'm happy to submit a PR. Starting this conversation early to see if there is interest in this feature / understand if there's a reason why this isn't already supported in HF.
| 09-14-2021 22:04:55 | 09-14-2021 22:04:55 | I am not sure what you are saying. Metrics are computed during training at each evaluation step/epoch, depending on the strategy you picked.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
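For context, a small sketch of the behaviour described in the reply above (the model/dataset wiring is illustrative):
```python
import numpy as np
from transformers import Trainer, TrainingArguments

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="steps",   # metrics are computed (and logged) every `eval_steps` during training
    eval_steps=500,
    logging_steps=500,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset,
                  eval_dataset=eval_dataset, compute_metrics=compute_metrics)
trainer.train()
```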
transformers | 13,568 | closed | Marian Encoder's last hidden states from MarianMT and TFMarianMTModel don't match | ## Environment info
- `transformers` version: 4.10.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Additional Environment Info: Tensorflow Version - 2.6.0
### Who can help
@patil-suraj, @Rocketknight1
## Information
Model I am using (Bert, XLNet ...): MarianMT and TFMarianMT Models ( [pretrained opus-mt en-hi model](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi/tree/main))
The problem arises when using:
* [ ] my own modified scripts: [Reproducible Error Script](https://colab.research.google.com/drive/1mIEip64duk6upCerF8J1HOoVyN4Dz7sF?usp=sharing)
The tasks I am working on is:
* [ ] my own task or dataset: MT, Not Applicable
## To reproduce
Steps to reproduce the behavior:
1. Run the provided error script - It can be noticed that the encoder's last hidden state is not the same for MarianMT and TFMarianMT Models.
## Expected behavior
To the best of my understanding, the hidden states of the last layer of the encoder should be consistent across both variants (TFMarianMT and MarianMT). The variance in the encoder's hidden states will carry forward and introduce differences in the decoded output's generation, potentially leading to different outputs from the same pretrained model.
Thanks in Advance! | 09-14-2021 19:57:25 | 09-14-2021 19:57:25 | Hi! I ran your script, but I noticed that when I re-ran the code, even with the same framework, I got different outputs. When I set `training=False` for the TF model call and called `model.eval()` for PyTorch this stopped, so I'm guessing it was caused by Dropout layers or something similar.
Even after I did this, though, the output of the two models remained different, so this may indeed be a bug. We're investigating!<|||||>> Hi! I ran your script, but I noticed that when I re-ran the code, even with the same framework, I got different outputs. When I set `training=False` for the TF model call and called `model.eval()` for PyTorch this stopped, so I'm guessing it was caused by Dropout layers or something similar.
>
> Even after I did this, though, the output of the two models remained different, so this may indeed be a bug. We're investigating!
Thanks for the quick response. I debugged a little myself by looking into generation_tf_utils.py and figured that I got the pytorch equivalent outputs by calling:
encoder = model.get_encoder()
encoded_sequence = encoder(input_ids)
where model is an instance of the TFMarianMTModel. The key inconsistency then appears to be the difference between this output and the encoder.last_hidden_state. It might also be useful to point this out in the documentation since, prima facie, the attribute does not seem to be returning an expected value.
Edit: Once I replaced the encoded_sequence (in the file) with the encoder_sequence returned by the new method, I was also able to replicate the performance, i.e. get a nearly identical output to the one generated by the pytorch model. <|||||>We believe this may be linked to another issue: https://github.com/huggingface/transformers/issues/12647
We're now investigating both - hopefully we can resolve both soon.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,567 | closed | CANINE model in huggingface transformers performs worse than mBERT? | I've been trying to use the newly added CANINE model from Google for multilingual NER task. Specifically, I finetune the CanineForTokenClassification model on WikiAnn English NER data. However, I got much worse performance on English NER compared to mBERT. Here are the rough performance numbers:
NER dev F1 on en:
canine: 79
mbert: 85
POS dev F1 on en:
canine: 90
mbert: 95
I also evaluated the finetuned model on the test data from a couple languages other than English, and they all seem worse than mBERT. I tried learning rate of 2e-5, 5e-5, and trained for 10, 20 epochs, but they don't seem to affect the results too much.
I'm not sure if anyone had the same result with me or if there are something wrong I did when using CANINE on huggingface transformers?
@stefan-it @NielsRogge
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.0.dev0
- Platform: Linux-4.15.0-133-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0 (True)
| 09-14-2021 17:34:54 | 09-14-2021 17:34:54 | Hi @cindyxinyiwang ,
I ran a one-run experiment with Flair (I recently wrote an embedding implementation for CANINE to use it with Flair) and the results on XTREME/WikiANN Turkish split are:
| Model | Dev F1 | Test F1
| ----------------- | ------ | -------
| mBERT (cased) | 0.9358 | 0.9333
| `google/canine-s` | 0.9051 | 0.9044
| `google/canine-c` | 0.9043 | 0.9008
| `google/byt5-base` | 0.9255 | 0.9245
So there's a current performance diff of -3.07%. I'll run another experiment with the `CANINE-C` model and report the results here!<|||||>I'm using:
```python
encoding = self.tokenizer([tokenized_string], padding="longest", return_tensors="pt")
hidden_states = self.model(**encoding).last_hidden_state
```
And the first - let's say character - embedding (analog. to first subword embedding).<|||||>I see thanks so much for sharing the results! My run of canine-c has similar results to canine-s. I think the huggingface implementation also uses the first character of each word. I thought it might be a sequence tagging issue. Then I additionally tried PAWS-X classification but it's also worse than mBERT on all languages...Not sure if there is anything specific we are missing<|||||>Hey @cindyxinyiwang what Pos Tagging dataset did you use btw. for English :thinking: E.g. in the datasets library there are a few English subsets, see [here](https://huggingface.co/datasets/viewer/?dataset=universal_dependencies).<|||||>Hey! I think i probably used all the concatenated English training data in UD 2.5. I used the preprocessing script from XTREME instead of directly from the datasets library<|||||>Anyways thanks for confirming with my observation. Maybe there is something wrong with the huggingface implementation, or perhaps there are some intricate fine-tuning hyperparameters...<|||||>Thanks for trying out CANINE, I don't think I've made a mistake in my implementation, the model outputs the same tensors as the original one on the same data. Also, given that one can obtain an F1 score of > 0.90, I don't think it's a modeling issue. Note that in the original paper, the authors did not try out CANINE on NER problems, only QA.
Also cc'ing @dhgarrette, the original author.<|||||>I see that makes sense. Thanks so much for implementing the model so we can use it very easily! |
transformers | 13,566 | closed | Fix test_fetcher when setup is updated | # What does this PR do?
When we update a dependency, it's better to run all tests to make sure we catch any failure linked to that new dep. This PR adapts the `test_fectcher` to make sure this is the case. | 09-14-2021 17:26:01 | 09-14-2021 17:26:01 | |
transformers | 13,565 | closed | [Flax] Fixes typo in Bart based Flax Models | ## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj @patrickvonplaten | 09-14-2021 16:29:23 | 09-14-2021 16:29:23 | |
transformers | 13,564 | closed | upgrade sentencepiece version | # What does this PR do?
Fixes #13563
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-14-2021 15:39:53 | 09-14-2021 15:39:53 | Just double-checked, it's a Python 3.9 problem for the previous version. No problem with upgrading as long as all tests pass, but we won't know that until I fix a bug in the tests_fetcher utils, so stay tuned :-)<|||||>In the meantime, you can run `make style` on your branch to remove the error in code quality.<|||||>Can you rebase to integrate #13566 in your branch? It should then run all tests so we can double-check this doesn't break anything.<|||||>> Can you rebase to integrate #13566 in your branch? It should then run all tests so we can double-check this doesn't break anything.
Done 👍🏾 <|||||>Discussing a bit more with Lysandre, I think we actually want >=0.0.91 and != 0.0.92 (which was the problematic version that made us pin sentencepiece). Do you mind adapting the PR? Thanks!<|||||>You'll need to run `make style` again to update the dependency table :-)<|||||>> You'll need to run `make style` again to update the dependency table :-)
Oh ok, I was wondering where it came from...<|||||>With a fresh venv with python3 on Debian 11, here is what a fresh `pip install -e '.[dev]'` results in:
```
Successfully installed APScheduler-3.7.0 Flask-1.1.4 GitPython-3.1.18 Jinja2-2.11.3 Mako-1.1.5 MarkupSafe-1.1.1 Pillow-8.3.2 PrettyTable-2.2.0 Pygments-2.10.0 absl-py-0.13.0 aiohttp-3.7.4.post0 alabaster-0.7.12 alembic-1.7.1 appdirs-1.4.4 arrow-1.1.1 astunparse-1.6.3 async-timeout-3.0.1 attrs-21.2.0 autopage-0.4.0 babel-2.9.1 binaryornot-0.4.4 black-21.4b0 brotli-1.0.9 cachetools-4.2.2 cffi-1.14.6 chardet-4.0.0 chex-0.0.8 clang-5.0 click-7.1.2 cliff-3.9.0 cmaes-0.8.2 cmd2-2.2.0 codecarbon-1.2.0 colorama-0.4.4 colorlog-6.4.1 commonmark-0.9.1 cookiecutter-1.7.2 cycler-0.10.0 dash-2.0.0 dash-bootstrap-components-0.13.0 dash-core-components-2.0.0 dash-html-components-2.0.0 dash-table-5.0.0 datasets-1.12.0 dill-0.3.4 dm-tree-0.1.6 docutils-0.16 execnet-1.9.0 faiss-cpu-1.7.1.post2 fire-0.4.0 flake8-3.9.2 flask-compress-1.10.1 flatbuffers-1.12 flax-0.3.4 fsspec-2021.8.1 fugashi-1.1.1 gast-0.4.0 gitdb-4.0.7 google-auth-1.35.0 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 greenlet-1.1.1 grpcio-1.40.0 h5py-3.1.0 imagesize-1.2.0 iniconfig-1.1.1 ipadic-1.0.0 isort-5.9.3 itsdangerous-1.1.0 jax-0.2.20 jaxlib-0.1.71 jinja2-time-0.2.0 keras-2.6.0 keras-preprocessing-1.1.2 keras2onnx-1.7.0 kiwisolver-1.3.2 markdown-3.3.4 matplotlib-3.4.3 mccabe-0.6.1 msgpack-1.0.2 multidict-5.1.0 multiprocess-0.70.12.2 mypy-extensions-0.4.3 nltk-3.6.2 numpy-1.19.5 oauthlib-3.1.1 onnx-1.10.1 onnxconverter-common-1.8.1 opt-einsum-3.3.0 optax-0.0.9 optuna-2.9.1 pandas-1.3.3 parameterized-0.8.1 pathspec-0.9.0 pbr-5.6.0 plac-1.3.3 plotly-5.3.1 pluggy-1.0.0 portalocker-2.0.0 poyo-0.5.0 protobuf-3.17.3 psutil-5.8.0 py-1.10.0 py-cpuinfo-8.0.0 pyarrow-5.0.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycodestyle-2.7.0 pycparser-2.20 pyflakes-2.3.1 pynvml-11.0.0 pyperclip-1.8.2 pytest-6.2.5 pytest-forked-1.3.0 pytest-timeout-1.4.2 pytest-xdist-2.3.0 python-dateutil-2.8.2 python-slugify-5.0.2 pytz-2021.1 ray-1.6.0 recommonmark-0.7.1 redis-3.5.3 requests-oauthlib-1.3.0 rouge-score-0.0.4 rsa-4.7.2 sacrebleu-1.5.1 scikit-learn-0.24.2 scipy-1.7.1 sentencepiece-0.1.96 six-1.15.0 smmap-4.0.0 snowballstemmer-2.1.0 soundfile-0.10.3.post1 sphinx-3.2.1 sphinx-copybutton-0.4.0 sphinx-intl-2.0.1 sphinx-markdown-tables-0.0.15 sphinx-rtd-theme-0.4.3 sphinxcontrib-applehelp-1.0.2 sphinxcontrib-devhelp-1.0.2 sphinxcontrib-htmlhelp-2.0.0 sphinxcontrib-jsmath-1.0.1 sphinxcontrib-qthelp-1.0.3 sphinxcontrib-serializinghtml-1.1.5 sphinxext-opengraph-0.4.1 sqlalchemy-1.4.23 stevedore-3.4.0 tabulate-0.8.9 tenacity-8.0.1 tensorboard-2.6.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.0 tensorboardX-2.4 tensorflow-2.6.0 tensorflow-estimator-2.6.0 termcolor-1.1.0 text-unidecode-1.3 threadpoolctl-2.2.0 timeout-decorator-0.5.0 timm-0.4.12 toml-0.10.2 toolz-0.11.1 torch-1.9.0 torchaudio-0.9.0 torchvision-0.10.0 transformers typing-extensions-3.7.4.3 tzlocal-2.1 unidic-1.0.3 unidic-lite-1.0.8 wasabi-0.8.2 wcwidth-0.2.5 werkzeug-1.0.1 wheel-0.37.0 wrapt-1.12.1 xxhash-2.0.2 yarl-1.6.3
```
|
transformers | 13,563 | closed | sentencepiece version need upgrade | ## Environment info
- `transformers` version: 4.11.0.dev0
- Platform: Debian Gnu/Linux
- Python version: 3.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
- setup: @LysandreJik maybe?
## Information
sentencepiece==0.1.91 does not exist anymore
pip install fails:
```
ERROR: Could not find a version that satisfies the requirement sentencepiece==0.1.91
ERROR: No matching distribution found for sentencepiece==0.1.91
```
## To reproduce
Steps to reproduce the behavior:
1. create a new venv
2. pip install -e .
## Expected behavior
- Need to bump to the latest but minor version change 0.1.96 for transformer's install to work
| 09-14-2021 15:32:49 | 09-14-2021 15:32:49 | It just worked fine on a new env. Maybe you need a `pip install --upgrade pip`?<|||||>> It just worked fine on a new env. Maybe you need a `pip install --upgrade pip`?
Yeah indeed, it worked for me too. |
transformers | 13,562 | closed | Layoutlm onnx support (Issue #13300) | # What does this PR do?
This PR extends ONNX support to LayoutLM as explained in https://huggingface.co/transformers/serialization.html?highlight=onnx#converting-an-onnx-model-using-the-transformers-onnx-package
Fixes Issue #13300
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @mfuntowicz @LysandreJik | 09-14-2021 12:39:29 | 09-14-2021 12:39:29 | Looks almost ready, great work @nishprabhu!!
Thank you for your contribution.<|||||>Hi, any update about this? Any other changes required?
@LysandreJik @mfuntowicz @NielsRogge<|||||>LGTM! <|||||>This is making the LayoutLM ONNX test fail for the following reason:
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <transformers.models.layoutlm.configuration_layoutlm.LayoutLMOnnxConfig object at 0x7f344811ceb0>
tokenizer = PreTrainedTokenizerFast(name_or_path='microsoft/layoutlm-base-uncased', vocab_size=30522, model_max_len=512, is_fast=T...okens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'})
batch_size = -1, seq_length = -1, is_pair = False
framework = <TensorType.PYTORCH: 'pt'>
def generate_dummy_inputs(
self,
tokenizer: PreTrainedTokenizer,
batch_size: int = -1,
seq_length: int = -1,
is_pair: bool = False,
framework: Optional[TensorType] = None,
) -> Mapping[str, Any]:
"""
Generate inputs to provide to the ONNX exporter for the specific framework
Args:
tokenizer: The tokenizer associated with this model configuration
batch_size: The batch size (int) to export the model for (-1 means dynamic axis)
seq_length: The sequence length (int) to export the model for (-1 means dynamic axis)
is_pair: Indicate if the input is a pair (sentence 1, sentence 2)
framework: The framework (optional) the tokenizer will generate tensor for
Returns:
Mapping[str, Tensor] holding the kwargs to provide to the model's forward function
"""
input_dict = super().generate_dummy_inputs(tokenizer, batch_size, seq_length, is_pair, framework)
# Generate a dummy bbox
box = [48, 84, 73, 128]
if not framework == TensorType.PYTORCH:
raise NotImplementedError("Exporting LayoutLM to ONNX is currently only supported for PyTorch.")
if not is_torch_available():
raise ValueError("Cannot generate dummy inputs without PyTorch installed.")
import torch
> input_dict["bbox"] = torch.tensor(
[
[0] * 4,
*[box] * seq_length,
[self.max_2d_positions] * 4,
]
).tile(batch_size, 1, 1)
E RuntimeError: Trying to create tensor with negative dimension -1: [-1, 2, 4]
```
Could you take a look @nishprabhu @michaelbenayoun?
Will skip this test in the meantime.<|||||>I think we have to import the compute_effective_axis_dimension function from the onnx module and compute the batch_size and seq_length dimensions in the configuration_layoutlm.py file. We are currently using -1 as the value for both dimensions which is causing the error. @michaelbenayoun @LysandreJik
|
transformers | 13,561 | closed | m2m100 conversion failed from fairseq to hf format | When I try to convert m2m100 model from fairseq repository the script throws an error:
```
RuntimeError: Error(s) in loading state_dict for M2M100Model:
Missing key(s) in state_dict: "encoder.embed_positions.weights", "decoder.embed_positions.weights".
```
I manually checked the state dict and no positional encodings are present in the checkpoint. How did you convert this model to the one in HF format?
The problem arises when using:
* [script](https://github.com/harveenchadha/transformers/blob/master/src/transformers/models/m2m_100/convert_m2m100_original_checkpoint_to_pytorch.py)
* original [model](https://dl.fbaipublicfiles.com/m2m_100/418M_last_checkpoint.pt ) I am trying to convert:
## Environment info
- `transformers` version: 4.9.1
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): None
- fairseq version: 1.0.0a0+9589463
@patil-suraj @patrickvonplaten
Model I am using : M2M 100
| 09-14-2021 12:10:05 | 09-14-2021 12:10:05 | Hi @harveenchadha
M2M100 uses sinusoidal positional embeds, so it does not save those in the `state_dict`, I didn't get this error when I converted the models. Is this a new version?
And to fix this, you could either add those keys in `ignore_keys` list [here](https://github.com/harveenchadha/transformers/blob/b89a964d3f46bff56e7652e572c63c13b703b7d8/src/transformers/models/m2m_100/convert_m2m100_original_checkpoint_to_pytorch.py#L24) or pass `strict=False` to `model.load_state_dict`<|||||>Sorry I am a bit new to translation models so I didn't know about sinusoidal positional embeddings.
No it is the same model that was open sourced by facebook. When I do strict=false, I am able to get a final model. But the final files generated are config.json and pytorch_model.bin
I think there should be three other files:
1. tokenizer_config.json
2. vocab.json
3. special_tokens_map.json
How to generate these files? <|||||>the vocab files needs to be created using the fairseqs `dict.txt`. The conversion script does not convert the tokenizer.
I will try to post the tokenizer conversion scripts in few days.<|||||>That would be really helpful. Right now I have [published](https://huggingface.co/Harveenchadha/indictrans) the model but it cannot be used as the tokenizer is not ready.<|||||>If it uses the same M2M100 tokenizer, then you could use the tokenizer from https://huggingface.co/facebook/m2m100_418M
and upload those file to your repo.<|||||>I tried this but this didn't work. Actually the tokenizer is not the same as language support is different.<|||||>if it's the same model, then the tokenizer should also be the same, no? Did you create your tokenizer or did you add any tokens?<|||||>The model arch is same but its not the same model. Original m2m 100 has support for 100 languages but this model is specific for Indic languages and support 13 languages only as it was finetuned on Indic parallel corpus.
I am not the author of this model, so I am trying to understand the code structure.
[Here](https://github.com/AI4Bharat/indicTrans/blob/main/inference/engine.py) is the inference file.
Maybe I can load the tokenizer from here and model from HuggingFace to give it a try how this works.
<|||||>Aah, I see. The tokenizer looks different than the current m2m100 tokenizer i.e it uses different normalizers, preprocessors.
In this case, we could add a new tokenizer file for this model. In `Transformers` it's possible to just add a tokenizer, for example [`barthez`](https://github.com/huggingface/transformers/tree/master/src/transformers/models/barthez) uses the BART arch but the tokenizer is different, hence only tokenizer file is added, we could do something similar here and add `IndicTransTokenizer`<|||||>This sounds really good, can you please give me a head start like what methods I need to implement from the base library to achieve this? Any document will also be fine.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@harveenchadha I am also getting the same error loading original m2m100 checkpoint. Missing key(s) in state_dict: "encoder.embed_positions.weights", "decoder.embed_positions.weights"
How did you solve?<|||||>Hey @nikhiljaiswal !
M2M uses sinusoidal position embeds, so it's fine if those weights are missing.
To fix this, you could either add those keys in `ignore_keys` list [here](https://github.com/harveenchadha/transformers/blob/b89a964d3f46bff56e7652e572c63c13b703b7d8/src/transformers/models/m2m_100/convert_m2m100_original_checkpoint_to_pytorch.py#L24) or pass `strict=False` to `model.load_state_dict`<|||||>Thanks @patil-suraj for the response. It worked. I had another doubt. After converting, I got config.json & pytorch_model.bin files. Now to load the model, I am using
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("pytorch_model.bin")
I am facing error. Is not this the way to load the model?<|||||>you should pass the path to the directory containing the `config.json` and `pytorch_model.bin` files to the `from_pretrained` method.<|||||>@patil-suraj that worked, thanks. Now when I am trying to load the finetuned m2m model on my custom data, I am facing new error while converting the checkpoint
AttributeError: 'NoneType' object has no attribute 'encoder_layers'<|||||>@patil-suraj can you please help me with above problem? |
transformers | 13,560 | closed | add flax mbart in auto seq2seq lm | # What does this PR do?
Adds `FlaxMBartForConditionalGeneration` in `FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES` | 09-14-2021 10:30:36 | 09-14-2021 10:30:36 | |
transformers | 13,559 | closed | [Pretrained Model] Add resize_position_embeddings | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds a `resize_position_embeddings` function similar to `resize_token_embeddings`. In contrast to `resize_token_embeddings`, the function will be added to the `PreTrainedModel` class **only** as an abstract function and should be implemented directly in the model-specific class. It will raise a `NotImplementedError(...)` if not overwritten in the model-specific class.
The reason behind this code design is that it's easier to read (all the logic is in the same modeling file) and it allows for more flexibility, which is necessary as there are quite a few different kinds of position embeddings. For now, the method is only implemented for DistilBERT and Pegasus, mostly to enable use cases such as those described in https://github.com/huggingface/transformers/issues/11344
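A rough sketch of the pattern described above (illustrative only; names and the error message are simplified, and the PR itself defines the exact signature and the model-specific overrides):

```python
class PreTrainedModel:
    ...

    def resize_position_embeddings(self, new_num_position_embeddings: int):
        # Abstract by design: models that support it (e.g. DistilBERT, Pegasus in this PR)
        # override this method directly in their own modeling file.
        raise NotImplementedError(
            f"`resize_position_embeddings` is not implemented for {self.__class__}."
        )
```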
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-14-2021 10:11:17 | 09-14-2021 10:11:17 | > I agree with the solution here, very clean! But from the first look, it seems it adds a bit of complexity.
>
> Also,
>
> * Should we always resize pos embeds in seq2seq scripts? IMO it's fine if we are using sinusoidal or non-learned pos embeds, but for learned position embeds the user should be made aware of this, as some embeddings will be randomly initialized and some will be trained, which might/might not affect training, so maybe add a flag?
> * I'm not sure if it's a good idea to reduce the size of learned embeddings, that essentially means throwing away learned weights. And, pos embeds don't need much memory compared to the rest of the model so why reduce the size?
1. See: https://github.com/huggingface/transformers/pull/13559#discussion_r708308465 , think I'm happy to go with both solutions -> let's also see what @stas00 thinks!
2. Note that the summarization script never reduces the position embeddings but only adds tokens if necessary. Think it doesn't hurt to have the possibility to reduce the position embeddings in general. |
transformers | 13,558 | closed | Internal links in README.md tables are broken | Hello, I found that the link in the model card is not working properly https://huggingface.co/uer/chinese_roberta_L-2_H-128

Other model cards also have the same problem https://huggingface.co/google/bert_uncased_L-2_H-128_A-2

| 09-14-2021 09:33:18 | 09-14-2021 09:33:18 | Can this problem be fixed?
It is inconvenient when the link is not working properly.<|||||>Hello, you should contact the author of the model, which seems to be part of the following GitHub organization: https://github.com/dbiir/UER-py<|||||>@LysandreJik Hello, I am the author of these models. These links worked normally before, and I did not make any changes. The same problem appeared in Google's model card. I think it might be a problem with the web page?<|||||>reporting this for @elishowk @Pierrci, might be a (new) issue with our markdown renderer?<|||||>Hi,
Thanks for reporting.
Seems like a bug on our side.
[Here's the same markdown rendered by markdownjs v2.1.3, and links are OK](https://marked.js.org/demo/?text=---%0Alanguage%3A%20Chinese%0Adatasets%3A%20CLUECorpusSmall%0Awidget%3A%20%0A-%20text%3A%20%22%E5%8C%97%E4%BA%AC%E6%98%AF%5BMASK%5D%E5%9B%BD%E7%9A%84%E9%A6%96%E9%83%BD%E3%80%82%22%0A%0A%0A%0A---%0A%0A%0A%23%20Chinese%20RoBERTa%20Miniatures%0A%0A%23%23%20Model%20description%0A%0AThis%20is%20the%20set%20of%2024%20Chinese%20RoBERTa%20models%20pre-trained%20by%20%5BUER-py%5D(https%3A%2F%2Fgithub.com%2Fdbiir%2FUER-py%2F)%2C%20which%20is%20introduced%20in%20%5Bthis%20paper%5D(https%3A%2F%2Farxiv.org%2Fabs%2F1909.05658).%0A%0A%5BTurc%20et%20al.%5D(https%3A%2F%2Farxiv.org%2Fabs%2F1908.08962)%20have%20shown%20that%20the%20standard%20BERT%20recipe%20is%20effective%20on%20a%20wide%20range%20of%20model%20sizes.%20Following%20their%20paper%2C%20we%20released%20the%2024%20Chinese%20RoBERTa%20models.%20In%20order%20to%20facilitate%20users%20to%20reproduce%20the%20results%2C%20we%20used%20the%20publicly%20available%20corpus%20and%20provided%20all%20training%20details.%0A%0AYou%20can%20download%20the%2024%20Chinese%20RoBERTa%20miniatures%20either%20from%20the%20%5BUER-py%20Modelzoo%20page%5D(https%3A%2F%2Fgithub.com%2Fdbiir%2FUER-py%2Fwiki%2FModelzoo)%2C%20or%20via%20HuggingFace%20from%20the%20links%20below%3A%0A%0A%7C%20%20%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%20%20%20%20H%3D128%20%20%20%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%20%20%20%20H%3D256%20%20%20%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%20%20%20%20%20H%3D512%20%20%20%20%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%20%20%20%20%20H%3D768%20%20%20%20%20%20%20%20%20%20%20%20%7C%0A%7C%20--------%20%7C%20%3A-----------------------%3A%20%7C%20%3A-----------------------%3A%20%7C%20%3A-------------------------%3A%20%7C%20%3A-------------------------%3A%20%7C%0A%7C%20**L%3D2**%20%20%7C%20%5B**2%2F128%20(Tiny)**%5D%5B2_128%5D%20%7C%20%20%20%20%20%20%5B2%2F256%5D%5B2_256%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%5B2%2F512%5D%5B2_512%5D%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%5B2%2F768%5D%5B2_768%5D%20%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D4**%20%20%7C%20%20%20%20%20%20%5B4%2F128%5D%5B4_128%5D%20%20%20%20%20%20%20%7C%20%5B**4%2F256%20(Mini)**%5D%5B4_256%5D%20%7C%20%5B**4%2F512%20(Small)**%5D%5B4_512%5D%20%20%7C%20%20%20%20%20%20%20%5B4%2F768%5D%5B4_768%5D%20%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D6**%20%20%7C%20%20%20%20%20%20%5B6%2F128%5D%5B6_128%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B6%2F256%5D%5B6_256%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%5B6%2F512%5D%5B6_512%5D%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%5B6%2F768%5D%5B6_768%5D%20%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D8**%20%20%7C%20%20%20%20%20%20%5B8%2F128%5D%5B8_128%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B8%2F256%5D%5B8_256%5D%20%20%20%20%20%20%20%7C%20%5B**8%2F512%20(Medium)**%5D%5B8_512%5D%20%7C%20%20%20%20%20%20%20%5B8%2F768%5D%5B8_768%5D%20%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D10**%20%7C%20%20%20%20%20%5B10%2F128%5D%5B10_128%5D%20%20%20%20%20%20%7C%20%20%20%20%20%5B10%2F256%5D%5B10_256%5D%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B10%2F512%5D%5B10_512%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B10%2F768%5D%5B10_768%5D%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D12**%20%7C%20%20%20%20%20%5B12%2F128%5D%5B12_128%5D%20%20%20%20%20%20%7C%20%20%20%20%20%5B12%2F256%5D%5B12_256%5D%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B12%2F512%5D%5B12_512%5D%20%20%20%20%20%20%20%7C%20%5B**12%2F768%20(Base)**%5D%5B12_768%5D%20%7C%0A%0AHere%20are%20sco
res%20on%20the%20devlopment%20set%20of%20six%20Chinese%20tasks%3A%0A%0A%7C%20Model%20%20%20%20%20%20%20%20%20%20%7C%20Score%20%7C%20douban%20%7C%20chnsenticorp%20%7C%20lcqmc%20%7C%20tnews(CLUE)%20%7C%20iflytek(CLUE)%20%7C%20ocnli(CLUE)%20%7C%0A%7C%20--------------%20%7C%20%3A---%3A%20%7C%20%3A----%3A%20%7C%20%3A----------%3A%20%7C%20%3A---%3A%20%7C%20%3A---------%3A%20%7C%20%3A-----------%3A%20%7C%20%3A---------%3A%20%7C%0A%7C%20RoBERTa-Tiny%20%20%20%7C%2072.3%20%20%7C%20%2083.0%20%20%7C%20%20%20%20%2091.4%20%20%20%20%20%7C%2081.8%20%20%7C%20%20%20%2062.0%20%20%20%20%20%7C%20%20%20%20%2055.0%20%20%20%20%20%20%7C%20%20%20%2060.3%20%20%20%20%20%7C%0A%7C%20RoBERTa-Mini%20%20%20%7C%2075.7%20%20%7C%20%2084.8%20%20%7C%20%20%20%20%2093.7%20%20%20%20%20%7C%2086.1%20%20%7C%20%20%20%2063.9%20%20%20%20%20%7C%20%20%20%20%2058.3%20%20%20%20%20%20%7C%20%20%20%2067.4%20%20%20%20%20%7C%0A%7C%20RoBERTa-Small%20%20%7C%2076.8%20%20%7C%20%2086.5%20%20%7C%20%20%20%20%2093.4%20%20%20%20%20%7C%2086.5%20%20%7C%20%20%20%2065.1%20%20%20%20%20%7C%20%20%20%20%2059.4%20%20%20%20%20%20%7C%20%20%20%2069.7%20%20%20%20%20%7C%0A%7C%20RoBERTa-Medium%20%7C%2077.8%20%20%7C%20%2087.6%20%20%7C%20%20%20%20%2094.8%20%20%20%20%20%7C%2088.1%20%20%7C%20%20%20%2065.6%20%20%20%20%20%7C%20%20%20%20%2059.5%20%20%20%20%20%20%7C%20%20%20%2071.2%20%20%20%20%20%7C%0A%7C%20RoBERTa-Base%20%20%20%7C%2079.5%20%20%7C%20%2089.1%20%20%7C%20%20%20%20%2095.2%20%20%20%20%20%7C%2089.2%20%20%7C%20%20%20%2067.0%20%20%20%20%20%7C%20%20%20%20%2060.9%20%20%20%20%20%20%7C%20%20%20%2075.5%20%20%20%20%20%7C%0A%0AFor%20each%20task%2C%20we%20selected%20the%20best%20fine-tuning%20hyperparameters%20from%20the%20lists%20below%2C%20and%20trained%20with%20the%20sequence%20length%20of%20128%3A%0A%0A-%20epochs%3A%203%2C%205%2C%208%0A-%20batch%20sizes%3A%2032%2C%2064%0A-%20learning%20rates%3A%203e-5%2C%201e-4%2C%203e-4%0A%0A%23%23%20How%20to%20use%0A%0AYou%20can%20use%20this%20model%20directly%20with%20a%20pipeline%20for%20masked%20language%20modeling%20(take%20the%20case%20of%20RoBERTa-Medium)%3A%0A%0A%60%60%60python%0A%3E%3E%3E%20from%20transformers%20import%20pipeline%0A%3E%3E%3E%20unmasker%20%3D%20pipeline(%27fill-mask%27%2C%20model%3D%27uer%2Fchinese_roberta_L-8_H-512%27)%0A%3E%3E%3E%20unmasker(%22%E4%B8%AD%E5%9B%BD%E7%9A%84%E9%A6%96%E9%83%BD%E6%98%AF%5BMASK%5D%E4%BA%AC%E3%80%82%22)%0A%5B%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E5%8C%97%20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%27%2C%20%0A%20%20%20%20%20%27score%27%3A%200.8701988458633423%2C%20%0A%20%20%20%20%20%27token%27%3A%201266%2C%20%0A%20%20%20%20%20%27token_str%27%3A%20%27%E5%8C%97%27%7D%2C%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E5%8D%97%20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%27%2C%0A%20%20%20%20%20%27score%27%3A%200.1194809079170227%2C%20%0A%20%20%20%20%20%27token%27%3A%201298%2C%20%0A%20%20%20%20%20%27token_str%27%3A%20%27%E5%8D%97%27%7D%2C%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E4%B8%9C%20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%27%2C%20%0A%20%20%20%20%20%27score%27%3A%200.0037803512532263994%2C%20%0A%20%20%20%20%20%27token%27%3A%20691%2C%20%0A%20%20%20%20%20%27token_str%27%3A%20%27%E4%B8%9C%27%7D%2C%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E6%99%AE%20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%2
7%2C%0A%20%20%20%20%20%27score%27%3A%200.0017127094324678183%2C%20%0A%20%20%20%20%20%27token%27%3A%203249%2C%0A%20%20%20%20%20%27token_str%27%3A%20%27%E6%99%AE%27%7D%2C%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E6%9C%9B%20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%27%2C%0A%20%20%20%20%20%27score%27%3A%200.001687526935711503%2C%0A%20%20%20%20%20%27token%27%3A%203307%2C%20%0A%20%20%20%20%20%27token_str%27%3A%20%27%E6%9C%9B%27%7D%0A%5D%0A%60%60%60%0A%0AHere%20is%20how%20to%20use%20this%20model%20to%20get%20the%20features%20of%20a%20given%20text%20in%20PyTorch%3A%0A%0A%60%60%60python%0Afrom%20transformers%20import%20BertTokenizer%2C%20BertModel%0Atokenizer%20%3D%20BertTokenizer.from_pretrained(%27uer%2Fchinese_roberta_L-8_H-512%27)%0Amodel%20%3D%20BertModel.from_pretrained(%22uer%2Fchinese_roberta_L-8_H-512%22)%0Atext%20%3D%20%22%E7%94%A8%E4%BD%A0%E5%96%9C%E6%AC%A2%E7%9A%84%E4%BB%BB%E4%BD%95%E6%96%87%E6%9C%AC%E6%9B%BF%E6%8D%A2%E6%88%91%E3%80%82%22%0Aencoded_input%20%3D%20tokenizer(text%2C%20return_tensors%3D%27pt%27)%0Aoutput%20%3D%20model(**encoded_input)%0A%60%60%60%0A%0Aand%20in%20TensorFlow%3A%0A%0A%60%60%60python%0Afrom%20transformers%20import%20BertTokenizer%2C%20TFBertModel%0Atokenizer%20%3D%20BertTokenizer.from_pretrained(%27uer%2Fchinese_roberta_L-8_H-512%27)%0Amodel%20%3D%20TFBertModel.from_pretrained(%22uer%2Fchinese_roberta_L-8_H-512%22)%0Atext%20%3D%20%22%E7%94%A8%E4%BD%A0%E5%96%9C%E6%AC%A2%E7%9A%84%E4%BB%BB%E4%BD%95%E6%96%87%E6%9C%AC%E6%9B%BF%E6%8D%A2%E6%88%91%E3%80%82%22%0Aencoded_input%20%3D%20tokenizer(text%2C%20return_tensors%3D%27tf%27)%0Aoutput%20%3D%20model(encoded_input)%0A%60%60%60%0A%0A%23%23%20Training%20data%0A%0A%5BCLUECorpusSmall%5D(https%3A%2F%2Fgithub.com%2FCLUEbenchmark%2FCLUECorpus2020%2F)%20is%20used%20as%20training%20data.%20We%20found%20that%20models%20pre-trained%20on%20CLUECorpusSmall%20outperform%20those%20pre-trained%20on%20CLUECorpus2020%2C%20although%20CLUECorpus2020%20is%20much%20larger%20than%20CLUECorpusSmall.%0A%0A%23%23%20Training%20procedure%0A%0AModels%20are%20pre-trained%20by%20%5BUER-py%5D(https%3A%2F%2Fgithub.com%2Fdbiir%2FUER-py%2F)%20on%20%5BTencent%20Cloud%5D(https%3A%2F%2Fcloud.tencent.com%2F).%20We%20pre-train%201%2C000%2C000%20steps%20with%20a%20sequence%20length%20of%20128%20and%20then%20pre-train%20250%2C000%20additional%20steps%20with%20a%20sequence%20length%20of%20512.%20We%20use%20the%20same%20hyper-parameters%20on%20different%20model%20sizes.%0A%0ATaking%20the%20case%20of%20RoBERTa-Medium%0A%0AStage1%3A%0A%0A%60%60%60%0Apython3%20preprocess.py%20--corpus_path%20corpora%2Fcluecorpussmall.txt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--vocab_path%20models%2Fgoogle_zh_vocab.txt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--dataset_path%20cluecorpussmall_seq128_dataset.pt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--processes_num%2032%20--seq_length%20128%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--dynamic_masking%20--target%20mlm%0A%60%60%60%0A%0A%60%60%60%0Apython3%20pretrain.py%20--dataset_path%20cluecorpussmall_seq128_dataset.pt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--vocab_path%20models%2Fgoogle_zh_vocab.txt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--config_path%20models%2Fbert%2Fmedium_config.json%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--output_model_
path%20models%2Fcluecorpussmall_roberta_medium_seq128_model.bin%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--world_size%208%20--gpu_ranks%200%201%202%203%204%205%206%207%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--total_steps%201000000%20--save_checkpoint_steps%20100000%20--report_steps%2050000%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--learning_rate%201e-4%20--batch_size%2064%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--embedding%20word_pos_seg%20--encoder%20transformer%20--mask%20fully_visible%20--target%20mlm%20--tie_weights%0A%60%60%60%0A%0AStage2%3A%0A%0A%60%60%60%0Apython3%20preprocess.py%20--corpus_path%20corpora%2Fcluecorpussmall.txt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--vocab_path%20models%2Fgoogle_zh_vocab.txt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--dataset_path%20cluecorpussmall_seq512_dataset.pt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--processes_num%2032%20--seq_length%20512%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--dynamic_masking%20--target%20mlm%0A%60%60%60%0A%0A%60%60%60%0Apython3%20pretrain.py%20--dataset_path%20cluecorpussmall_seq512_dataset.pt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--pretrained_model_path%20models%2Fcluecorpussmall_roberta_medium_seq128_model.bin-1000000%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--vocab_path%20models%2Fgoogle_zh_vocab.txt%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--config_path%20models%2Fbert%2Fmedium_config.json%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--output_model_path%20models%2Fcluecorpussmall_roberta_medium_seq512_model.bin%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--world_size%208%20--gpu_ranks%200%201%202%203%204%205%206%207%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--total_steps%20250000%20--save_checkpoint_steps%2050000%20--report_steps%2010000%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--learning_rate%205e-5%20--batch_size%2016%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--embedding%20word_pos_seg%20--encoder%20transformer%20--mask%20fully_visible%20--target%20mlm%20--tie_weights%0A%60%60%60%0A%0AFinally%2C%20we%20convert%20the%20pre-trained%20model%20into%20Huggingface%27s%20format%3A%0A%0A%60%60%60%0Apython3%20scripts%2Fconvert_bert_from_uer_to_huggingface.py%20--input_model_path%20models%2Fcluecorpussmall_roberta_medium_seq512_model.bin-250000%20%5C%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--output_model_path%20pytorch_model.bin%20%5C%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--layers_num%208%20--target%20mlm%0A%60%60%60%0A%0A%23%23%23%20BibTeX%20entry%20and%20citation%20info%0A%0A%60%60%60%0A%40article%7Bdevlin2018bert%2C%0A%20%20title%3D%7BBert%3A%20Pre-training%20of%20deep%20bidirectional%20transformers%20for%20language%20understanding%7D%2C%0A%20%20author%3
D%7BDevlin%2C%20Jacob%20and%20Chang%2C%20Ming-Wei%20and%20Lee%2C%20Kenton%20and%20Toutanova%2C%20Kristina%7D%2C%0A%20%20journal%3D%7BarXiv%20preprint%20arXiv%3A1810.04805%7D%2C%0A%20%20year%3D%7B2018%7D%0A%7D%0A%0A%40article%7Bliu2019roberta%2C%0A%20%20title%3D%7BRoberta%3A%20A%20robustly%20optimized%20bert%20pretraining%20approach%7D%2C%0A%20%20author%3D%7BLiu%2C%20Yinhan%20and%20Ott%2C%20Myle%20and%20Goyal%2C%20Naman%20and%20Du%2C%20Jingfei%20and%20Joshi%2C%20Mandar%20and%20Chen%2C%20Danqi%20and%20Levy%2C%20Omer%20and%20Lewis%2C%20Mike%20and%20Zettlemoyer%2C%20Luke%20and%20Stoyanov%2C%20Veselin%7D%2C%0A%20%20journal%3D%7BarXiv%20preprint%20arXiv%3A1907.11692%7D%2C%0A%20%20year%3D%7B2019%7D%0A%7D%0A%0A%40article%7Bturc2019%2C%0A%20%20title%3D%7BWell-Read%20Students%20Learn%20Better%3A%20On%20the%20Importance%20of%20Pre-training%20Compact%20Models%7D%2C%0A%20%20author%3D%7BTurc%2C%20Iulia%20and%20Chang%2C%20Ming-Wei%20and%20Lee%2C%20Kenton%20and%20Toutanova%2C%20Kristina%7D%2C%0A%20%20journal%3D%7BarXiv%20preprint%20arXiv%3A1908.08962v2%20%7D%2C%0A%20%20year%3D%7B2019%7D%0A%7D%0A%0A%40article%7Bzhao2019uer%2C%0A%20%20title%3D%7BUER%3A%20An%20Open-Source%20Toolkit%20for%20Pre-training%20Models%7D%2C%0A%20%20author%3D%7BZhao%2C%20Zhe%20and%20Chen%2C%20Hui%20and%20Zhang%2C%20Jinbin%20and%20Zhao%2C%20Xin%20and%20Liu%2C%20Tao%20and%20Lu%2C%20Wei%20and%20Chen%2C%20Xi%20and%20Deng%2C%20Haotang%20and%20Ju%2C%20Qi%20and%20Du%2C%20Xiaoyong%7D%2C%0A%20%20journal%3D%7BEMNLP-IJCNLP%202019%7D%2C%0A%20%20pages%3D%7B241%7D%2C%0A%20%20year%3D%7B2019%7D%0A%7D%0A%60%60%60%0A%0A%5B2_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-2_H-128%0A%5B2_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-2_H-256%0A%5B2_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-2_H-512%0A%5B2_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-2_H-768%0A%5B4_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-4_H-128%0A%5B4_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-4_H-256%0A%5B4_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-4_H-512%0A%5B4_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-4_H-768%0A%5B6_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-6_H-128%0A%5B6_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-6_H-256%0A%5B6_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-6_H-512%0A%5B6_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-6_H-768%0A%5B8_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-8_H-128%0A%5B8_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-8_H-256%0A%5B8_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-8_H-512%0A%5B8_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-8_H-768%0A%5B10_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-10_H-128%0A%5B10_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-10_H-256%0A%5B10_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-10_H-512%0A%5B10_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-10_H-768%0A%5B12_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-12_H-128%0A%5B12_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-12_H-256%0A%5B12_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-12_H-512%0A%5B12_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-12_H-768&options=%7B%0A%20%22baseUrl%22%3A%20null%2C%0A%20%22breaks%2
2%3A%20false%2C%0A%20%22extensions%22%3A%20null%2C%0A%20%22gfm%22%3A%20true%2C%0A%20%22headerIds%22%3A%20true%2C%0A%20%22headerPrefix%22%3A%20%22%22%2C%0A%20%22highlight%22%3A%20null%2C%0A%20%22langPrefix%22%3A%20%22language-%22%2C%0A%20%22mangle%22%3A%20true%2C%0A%20%22pedantic%22%3A%20false%2C%0A%20%22sanitize%22%3A%20false%2C%0A%20%22sanitizer%22%3A%20null%2C%0A%20%22silent%22%3A%20false%2C%0A%20%22smartLists%22%3A%20false%2C%0A%20%22smartypants%22%3A%20false%2C%0A%20%22tokenizer%22%3A%20null%2C%0A%20%22walkTokens%22%3A%20null%2C%0A%20%22xhtml%22%3A%20false%0A%7D&version=2.1.3)
Switching to markedjs v3 breaks it<|||||>@elishowk
Actually, when copying the whole model card content, the online markdownjs demo [also fails to render properly](https://marked.js.org/demo/?text=---%0Alanguage%3A%20Chinese%0Adatasets%3A%20CLUECorpusSmall%0Awidget%3A%20%0A-%20text%3A%20%22%E5%8C%97%E4%BA%AC%E6%98%AF%5BMASK%5D%E5%9B%BD%E7%9A%84%E9%A6%96%E9%83%BD%E3%80%82%22%0A%0A%0A%0A---%0A%0A%0A%23%20Chinese%20RoBERTa%20Miniatures%0A%0A%23%23%20Model%20description%0A%0AThis%20is%20the%20set%20of%2024%20Chinese%20RoBERTa%20models%20pre-trained%20by%20%5BUER-py%5D(https%3A%2F%2Fgithub.com%2Fdbiir%2FUER-py%2F)%2C%20which%20is%20introduced%20in%20%5Bthis%20paper%5D(https%3A%2F%2Farxiv.org%2Fabs%2F1909.05658).%0A%0A%5BTurc%20et%20al.%5D(https%3A%2F%2Farxiv.org%2Fabs%2F1908.08962)%20have%20shown%20that%20the%20standard%20BERT%20recipe%20is%20effective%20on%20a%20wide%20range%20of%20model%20sizes.%20Following%20their%20paper%2C%20we%20released%20the%2024%20Chinese%20RoBERTa%20models.%20In%20order%20to%20facilitate%20users%20to%20reproduce%20the%20results%2C%20we%20used%20the%20publicly%20available%20corpus%20and%20provided%20all%20training%20details.%0A%0AYou%20can%20download%20the%2024%20Chinese%20RoBERTa%20miniatures%20either%20from%20the%20%5BUER-py%20Modelzoo%20page%5D(https%3A%2F%2Fgithub.com%2Fdbiir%2FUER-py%2Fwiki%2FModelzoo)%2C%20or%20via%20HuggingFace%20from%20the%20links%20below%3A%0A%0A%7C%20%20%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%20%20%20%20H%3D128%20%20%20%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%20%20%20%20H%3D256%20%20%20%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%20%20%20%20%20H%3D512%20%20%20%20%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%20%20%20%20%20H%3D768%20%20%20%20%20%20%20%20%20%20%20%20%7C%0A%7C%20--------%20%7C%20%3A-----------------------%3A%20%7C%20%3A-----------------------%3A%20%7C%20%3A-------------------------%3A%20%7C%20%3A-------------------------%3A%20%7C%0A%7C%20**L%3D2**%20%20%7C%20%5B**2%2F128%20(Tiny)**%5D%5B2_128%5D%20%7C%20%20%20%20%20%20%5B2%2F256%5D%5B2_256%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%5B2%2F512%5D%5B2_512%5D%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%5B2%2F768%5D%5B2_768%5D%20%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D4**%20%20%7C%20%20%20%20%20%20%5B4%2F128%5D%5B4_128%5D%20%20%20%20%20%20%20%7C%20%5B**4%2F256%20(Mini)**%5D%5B4_256%5D%20%7C%20%5B**4%2F512%20(Small)**%5D%5B4_512%5D%20%20%7C%20%20%20%20%20%20%20%5B4%2F768%5D%5B4_768%5D%20%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D6**%20%20%7C%20%20%20%20%20%20%5B6%2F128%5D%5B6_128%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B6%2F256%5D%5B6_256%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%5B6%2F512%5D%5B6_512%5D%20%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%20%5B6%2F768%5D%5B6_768%5D%20%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D8**%20%20%7C%20%20%20%20%20%20%5B8%2F128%5D%5B8_128%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B8%2F256%5D%5B8_256%5D%20%20%20%20%20%20%20%7C%20%5B**8%2F512%20(Medium)**%5D%5B8_512%5D%20%7C%20%20%20%20%20%20%20%5B8%2F768%5D%5B8_768%5D%20%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D10**%20%7C%20%20%20%20%20%5B10%2F128%5D%5B10_128%5D%20%20%20%20%20%20%7C%20%20%20%20%20%5B10%2F256%5D%5B10_256%5D%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B10%2F512%5D%5B10_512%5D%20%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B10%2F768%5D%5B10_768%5D%20%20%20%20%20%20%20%7C%0A%7C%20**L%3D12**%20%7C%20%20%20%20%20%5B12%2F128%5D%5B12_128%5D%20%20%20%20%20%20%7C%20%20%20%20%20%5B12%2F256%5D%5B12_256%5D%20%20%20%20%20%20%7C%20%20%20%20%20%20%5B12%2F512%5D%5B12_512%5D%20%20%20%20%20%20%20%7C%20%5B**12%2F768%20(Base)**%5D%5B
12_768%5D%20%7C%0A%0AHere%20are%20scores%20on%20the%20devlopment%20set%20of%20six%20Chinese%20tasks%3A%0A%0A%7C%20Model%20%20%20%20%20%20%20%20%20%20%7C%20Score%20%7C%20douban%20%7C%20chnsenticorp%20%7C%20lcqmc%20%7C%20tnews(CLUE)%20%7C%20iflytek(CLUE)%20%7C%20ocnli(CLUE)%20%7C%0A%7C%20--------------%20%7C%20%3A---%3A%20%7C%20%3A----%3A%20%7C%20%3A----------%3A%20%7C%20%3A---%3A%20%7C%20%3A---------%3A%20%7C%20%3A-----------%3A%20%7C%20%3A---------%3A%20%7C%0A%7C%20RoBERTa-Tiny%20%20%20%7C%2072.3%20%20%7C%20%2083.0%20%20%7C%20%20%20%20%2091.4%20%20%20%20%20%7C%2081.8%20%20%7C%20%20%20%2062.0%20%20%20%20%20%7C%20%20%20%20%2055.0%20%20%20%20%20%20%7C%20%20%20%2060.3%20%20%20%20%20%7C%0A%7C%20RoBERTa-Mini%20%20%20%7C%2075.7%20%20%7C%20%2084.8%20%20%7C%20%20%20%20%2093.7%20%20%20%20%20%7C%2086.1%20%20%7C%20%20%20%2063.9%20%20%20%20%20%7C%20%20%20%20%2058.3%20%20%20%20%20%20%7C%20%20%20%2067.4%20%20%20%20%20%7C%0A%7C%20RoBERTa-Small%20%20%7C%2076.8%20%20%7C%20%2086.5%20%20%7C%20%20%20%20%2093.4%20%20%20%20%20%7C%2086.5%20%20%7C%20%20%20%2065.1%20%20%20%20%20%7C%20%20%20%20%2059.4%20%20%20%20%20%20%7C%20%20%20%2069.7%20%20%20%20%20%7C%0A%7C%20RoBERTa-Medium%20%7C%2077.8%20%20%7C%20%2087.6%20%20%7C%20%20%20%20%2094.8%20%20%20%20%20%7C%2088.1%20%20%7C%20%20%20%2065.6%20%20%20%20%20%7C%20%20%20%20%2059.5%20%20%20%20%20%20%7C%20%20%20%2071.2%20%20%20%20%20%7C%0A%7C%20RoBERTa-Base%20%20%20%7C%2079.5%20%20%7C%20%2089.1%20%20%7C%20%20%20%20%2095.2%20%20%20%20%20%7C%2089.2%20%20%7C%20%20%20%2067.0%20%20%20%20%20%7C%20%20%20%20%2060.9%20%20%20%20%20%20%7C%20%20%20%2075.5%20%20%20%20%20%7C%0A%0AFor%20each%20task%2C%20we%20selected%20the%20best%20fine-tuning%20hyperparameters%20from%20the%20lists%20below%2C%20and%20trained%20with%20the%20sequence%20length%20of%20128%3A%0A%0A-%20epochs%3A%203%2C%205%2C%208%0A-%20batch%20sizes%3A%2032%2C%2064%0A-%20learning%20rates%3A%203e-5%2C%201e-4%2C%203e-4%0A%0A%23%23%20How%20to%20use%0A%0AYou%20can%20use%20this%20model%20directly%20with%20a%20pipeline%20for%20masked%20language%20modeling%20(take%20the%20case%20of%20RoBERTa-Medium)%3A%0A%0A%60%60%60python%0A%3E%3E%3E%20from%20transformers%20import%20pipeline%0A%3E%3E%3E%20unmasker%20%3D%20pipeline(%27fill-mask%27%2C%20model%3D%27uer%2Fchinese_roberta_L-8_H-512%27)%0A%3E%3E%3E%20unmasker(%22%E4%B8%AD%E5%9B%BD%E7%9A%84%E9%A6%96%E9%83%BD%E6%98%AF%5BMASK%5D%E4%BA%AC%E3%80%82%22)%0A%5B%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E5%8C%97%20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%27%2C%20%0A%20%20%20%20%20%27score%27%3A%200.8701988458633423%2C%20%0A%20%20%20%20%20%27token%27%3A%201266%2C%20%0A%20%20%20%20%20%27token_str%27%3A%20%27%E5%8C%97%27%7D%2C%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E5%8D%97%20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%27%2C%0A%20%20%20%20%20%27score%27%3A%200.1194809079170227%2C%20%0A%20%20%20%20%20%27token%27%3A%201298%2C%20%0A%20%20%20%20%20%27token_str%27%3A%20%27%E5%8D%97%27%7D%2C%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E4%B8%9C%20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%27%2C%20%0A%20%20%20%20%20%27score%27%3A%200.0037803512532263994%2C%20%0A%20%20%20%20%20%27token%27%3A%20691%2C%20%0A%20%20%20%20%20%27token_str%27%3A%20%27%E4%B8%9C%27%7D%2C%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E6%99%AE%
20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%27%2C%0A%20%20%20%20%20%27score%27%3A%200.0017127094324678183%2C%20%0A%20%20%20%20%20%27token%27%3A%203249%2C%0A%20%20%20%20%20%27token_str%27%3A%20%27%E6%99%AE%27%7D%2C%0A%20%20%20%20%7B%27sequence%27%3A%20%27%5BCLS%5D%20%E4%B8%AD%20%E5%9B%BD%20%E7%9A%84%20%E9%A6%96%20%E9%83%BD%20%E6%98%AF%20%E6%9C%9B%20%E4%BA%AC%20%E3%80%82%20%5BSEP%5D%27%2C%0A%20%20%20%20%20%27score%27%3A%200.001687526935711503%2C%0A%20%20%20%20%20%27token%27%3A%203307%2C%20%0A%20%20%20%20%20%27token_str%27%3A%20%27%E6%9C%9B%27%7D%0A%5D%0A%60%60%60%0A%0AHere%20is%20how%20to%20use%20this%20model%20to%20get%20the%20features%20of%20a%20given%20text%20in%20PyTorch%3A%0A%0A%60%60%60python%0Afrom%20transformers%20import%20BertTokenizer%2C%20BertModel%0Atokenizer%20%3D%20BertTokenizer.from_pretrained(%27uer%2Fchinese_roberta_L-8_H-512%27)%0Amodel%20%3D%20BertModel.from_pretrained(%22uer%2Fchinese_roberta_L-8_H-512%22)%0Atext%20%3D%20%22%E7%94%A8%E4%BD%A0%E5%96%9C%E6%AC%A2%E7%9A%84%E4%BB%BB%E4%BD%95%E6%96%87%E6%9C%AC%E6%9B%BF%E6%8D%A2%E6%88%91%E3%80%82%22%0Aencoded_input%20%3D%20tokenizer(text%2C%20return_tensors%3D%27pt%27)%0Aoutput%20%3D%20model(**encoded_input)%0A%60%60%60%0A%0Aand%20in%20TensorFlow%3A%0A%0A%60%60%60python%0Afrom%20transformers%20import%20BertTokenizer%2C%20TFBertModel%0Atokenizer%20%3D%20BertTokenizer.from_pretrained(%27uer%2Fchinese_roberta_L-8_H-512%27)%0Amodel%20%3D%20TFBertModel.from_pretrained(%22uer%2Fchinese_roberta_L-8_H-512%22)%0Atext%20%3D%20%22%E7%94%A8%E4%BD%A0%E5%96%9C%E6%AC%A2%E7%9A%84%E4%BB%BB%E4%BD%95%E6%96%87%E6%9C%AC%E6%9B%BF%E6%8D%A2%E6%88%91%E3%80%82%22%0Aencoded_input%20%3D%20tokenizer(text%2C%20return_tensors%3D%27tf%27)%0Aoutput%20%3D%20model(encoded_input)%0A%60%60%60%0A%0A%23%23%20Training%20data%0A%0A%5BCLUECorpusSmall%5D(https%3A%2F%2Fgithub.com%2FCLUEbenchmark%2FCLUECorpus2020%2F)%20is%20used%20as%20training%20data.%20We%20found%20that%20models%20pre-trained%20on%20CLUECorpusSmall%20outperform%20those%20pre-trained%20on%20CLUECorpus2020%2C%20although%20CLUECorpus2020%20is%20much%20larger%20than%20CLUECorpusSmall.%0A%0A%23%23%20Training%20procedure%0A%0AModels%20are%20pre-trained%20by%20%5BUER-py%5D(https%3A%2F%2Fgithub.com%2Fdbiir%2FUER-py%2F)%20on%20%5BTencent%20Cloud%5D(https%3A%2F%2Fcloud.tencent.com%2F).%20We%20pre-train%201%2C000%2C000%20steps%20with%20a%20sequence%20length%20of%20128%20and%20then%20pre-train%20250%2C000%20additional%20steps%20with%20a%20sequence%20length%20of%20512.%20We%20use%20the%20same%20hyper-parameters%20on%20different%20model%20sizes.%0A%0ATaking%20the%20case%20of%20RoBERTa-Medium%0A%0AStage1%3A%0A%0A%60%60%60%0Apython3%20preprocess.py%20--corpus_path%20corpora%2Fcluecorpussmall.txt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--vocab_path%20models%2Fgoogle_zh_vocab.txt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--dataset_path%20cluecorpussmall_seq128_dataset.pt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--processes_num%2032%20--seq_length%20128%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--dynamic_masking%20--target%20mlm%0A%60%60%60%0A%0A%60%60%60%0Apython3%20pretrain.py%20--dataset_path%20cluecorpussmall_seq128_dataset.pt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--vocab_path%20models%2Fgoogle_zh_vocab.txt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--config_path%20models%2Fbert%2Fmedium_config.json%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%
20%20%20%20%20%20%20%20%20%20--output_model_path%20models%2Fcluecorpussmall_roberta_medium_seq128_model.bin%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--world_size%208%20--gpu_ranks%200%201%202%203%204%205%206%207%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--total_steps%201000000%20--save_checkpoint_steps%20100000%20--report_steps%2050000%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--learning_rate%201e-4%20--batch_size%2064%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--embedding%20word_pos_seg%20--encoder%20transformer%20--mask%20fully_visible%20--target%20mlm%20--tie_weights%0A%60%60%60%0A%0AStage2%3A%0A%0A%60%60%60%0Apython3%20preprocess.py%20--corpus_path%20corpora%2Fcluecorpussmall.txt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--vocab_path%20models%2Fgoogle_zh_vocab.txt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--dataset_path%20cluecorpussmall_seq512_dataset.pt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--processes_num%2032%20--seq_length%20512%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--dynamic_masking%20--target%20mlm%0A%60%60%60%0A%0A%60%60%60%0Apython3%20pretrain.py%20--dataset_path%20cluecorpussmall_seq512_dataset.pt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--pretrained_model_path%20models%2Fcluecorpussmall_roberta_medium_seq128_model.bin-1000000%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--vocab_path%20models%2Fgoogle_zh_vocab.txt%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--config_path%20models%2Fbert%2Fmedium_config.json%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--output_model_path%20models%2Fcluecorpussmall_roberta_medium_seq512_model.bin%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--world_size%208%20--gpu_ranks%200%201%202%203%204%205%206%207%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--total_steps%20250000%20--save_checkpoint_steps%2050000%20--report_steps%2010000%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--learning_rate%205e-5%20--batch_size%2016%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--embedding%20word_pos_seg%20--encoder%20transformer%20--mask%20fully_visible%20--target%20mlm%20--tie_weights%0A%60%60%60%0A%0AFinally%2C%20we%20convert%20the%20pre-trained%20model%20into%20Huggingface%27s%20format%3A%0A%0A%60%60%60%0Apython3%20scripts%2Fconvert_bert_from_uer_to_huggingface.py%20--input_model_path%20models%2Fcluecorpussmall_roberta_medium_seq512_model.bin-250000%20%5C%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--output_model_path%20pytorch_model.bin%20%5C%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20--layers_num%208%20--target%20mlm%0A%60%60%60%0A%0A%23%23%23%20BibTeX%20entry%20and%20citation%20info%0A%0A%60%60%60%0A%40article%7Bdevlin2018bert%2C%0A%20%20title%3D%7BBert%3A%20Pre-training%20of%20deep%20bidirectional%20transfo
rmers%20for%20language%20understanding%7D%2C%0A%20%20author%3D%7BDevlin%2C%20Jacob%20and%20Chang%2C%20Ming-Wei%20and%20Lee%2C%20Kenton%20and%20Toutanova%2C%20Kristina%7D%2C%0A%20%20journal%3D%7BarXiv%20preprint%20arXiv%3A1810.04805%7D%2C%0A%20%20year%3D%7B2018%7D%0A%7D%0A%0A%40article%7Bliu2019roberta%2C%0A%20%20title%3D%7BRoberta%3A%20A%20robustly%20optimized%20bert%20pretraining%20approach%7D%2C%0A%20%20author%3D%7BLiu%2C%20Yinhan%20and%20Ott%2C%20Myle%20and%20Goyal%2C%20Naman%20and%20Du%2C%20Jingfei%20and%20Joshi%2C%20Mandar%20and%20Chen%2C%20Danqi%20and%20Levy%2C%20Omer%20and%20Lewis%2C%20Mike%20and%20Zettlemoyer%2C%20Luke%20and%20Stoyanov%2C%20Veselin%7D%2C%0A%20%20journal%3D%7BarXiv%20preprint%20arXiv%3A1907.11692%7D%2C%0A%20%20year%3D%7B2019%7D%0A%7D%0A%0A%40article%7Bturc2019%2C%0A%20%20title%3D%7BWell-Read%20Students%20Learn%20Better%3A%20On%20the%20Importance%20of%20Pre-training%20Compact%20Models%7D%2C%0A%20%20author%3D%7BTurc%2C%20Iulia%20and%20Chang%2C%20Ming-Wei%20and%20Lee%2C%20Kenton%20and%20Toutanova%2C%20Kristina%7D%2C%0A%20%20journal%3D%7BarXiv%20preprint%20arXiv%3A1908.08962v2%20%7D%2C%0A%20%20year%3D%7B2019%7D%0A%7D%0A%0A%40article%7Bzhao2019uer%2C%0A%20%20title%3D%7BUER%3A%20An%20Open-Source%20Toolkit%20for%20Pre-training%20Models%7D%2C%0A%20%20author%3D%7BZhao%2C%20Zhe%20and%20Chen%2C%20Hui%20and%20Zhang%2C%20Jinbin%20and%20Zhao%2C%20Xin%20and%20Liu%2C%20Tao%20and%20Lu%2C%20Wei%20and%20Chen%2C%20Xi%20and%20Deng%2C%20Haotang%20and%20Ju%2C%20Qi%20and%20Du%2C%20Xiaoyong%7D%2C%0A%20%20journal%3D%7BEMNLP-IJCNLP%202019%7D%2C%0A%20%20pages%3D%7B241%7D%2C%0A%20%20year%3D%7B2019%7D%0A%7D%0A%60%60%60%0A%0A%5B2_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-2_H-128%0A%5B2_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-2_H-256%0A%5B2_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-2_H-512%0A%5B2_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-2_H-768%0A%5B4_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-4_H-128%0A%5B4_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-4_H-256%0A%5B4_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-4_H-512%0A%5B4_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-4_H-768%0A%5B6_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-6_H-128%0A%5B6_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-6_H-256%0A%5B6_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-6_H-512%0A%5B6_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-6_H-768%0A%5B8_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-8_H-128%0A%5B8_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-8_H-256%0A%5B8_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-8_H-512%0A%5B8_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-8_H-768%0A%5B10_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-10_H-128%0A%5B10_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-10_H-256%0A%5B10_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-10_H-512%0A%5B10_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-10_H-768%0A%5B12_128%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-12_H-128%0A%5B12_256%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-12_H-256%0A%5B12_512%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-12_H-512%0A%5B12_768%5D%3Ahttps%3A%2F%2Fhuggingface.co%2Fuer%2Fchinese_roberta_L-12_H-768
&options=%7B%0A%20%22baseUrl%22%3A%20null%2C%0A%20%22breaks%22%3A%20false%2C%0A%20%22extensions%22%3A%20null%2C%0A%20%22gfm%22%3A%20true%2C%0A%20%22headerIds%22%3A%20true%2C%0A%20%22headerPrefix%22%3A%20%22%22%2C%0A%20%22highlight%22%3A%20null%2C%0A%20%22langPrefix%22%3A%20%22language-%22%2C%0A%20%22mangle%22%3A%20true%2C%0A%20%22pedantic%22%3A%20false%2C%0A%20%22sanitize%22%3A%20false%2C%0A%20%22sanitizer%22%3A%20null%2C%0A%20%22silent%22%3A%20false%2C%0A%20%22smartLists%22%3A%20false%2C%0A%20%22smartypants%22%3A%20false%2C%0A%20%22tokenizer%22%3A%20null%2C%0A%20%22walkTokens%22%3A%20null%2C%0A%20%22xhtml%22%3A%20false%0A%7D&version=master) the readme content
<|||||>@elishowk if you try different Marked versions, you'll see that 2.x.x works fine, but 3.x.x doesn't<|||||>However the same markdown code works fine with other parsers, for instance https://dillinger.io/, https://stackedit.io/ and https://markdownlivepreview.com/
So it seems that a bug was introduced in Marked with the 3.0 version. One way to solve this in the short term would be to switch back to the last 2.x Marked version, and hope it doesn't create more issues.<|||||>thanks @beurkinger I'm on it<|||||>(I don’t remember why we upgraded marked, maybe it was me and I didn’t
check this model card)
<|||||>Hi everyone, this problem is resolved now.
Thanks |
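For reference, the pattern that stopped rendering is a table whose cells use reference-style links, with the link definitions listed at the bottom of the model card; a minimal reproduction (content trimmed) looks like this:

```markdown
|          |           H=128           |
| -------- | :-----------------------: |
| **L=2**  | [**2/128 (Tiny)**][2_128] |

[2_128]: https://huggingface.co/uer/chinese_roberta_L-2_H-128
```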
transformers | 13,557 | closed | How to use transformers pipeline with multi-gpu? | `ner_model = pipeline('ner', model=model, tokenizer=tokenizer, device=0, grouped_entities=True)`
the `device=0` argument tells the pipeline to use a single GPU only; please show me how to use multiple GPUs. | 09-14-2021 06:23:30 | 09-14-2021 06:23:30 | same question...<|||||>Currently this is not supported.
You could run `n` pipelines on the `n` devices as a workaround, would that work?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
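A minimal sketch of the workaround suggested above, i.e. building one pipeline per GPU and splitting the inputs across them; `model` and `tokenizer` are assumed to be already loaded (as in the snippet at the top of this issue), and the two-GPU count is an assumption:

```python
from transformers import pipeline

num_gpus = 2  # assumption: two visible GPUs
pipes = [
    pipeline("ner", model=model, tokenizer=tokenizer, device=i, grouped_entities=True)
    for i in range(num_gpus)
]

texts = ["first document ...", "second document ..."]
# Naive round-robin dispatch; a real setup would typically drive each device
# from its own thread or process to get actual parallelism.
results = [pipes[i % num_gpus](text) for i, text in enumerate(texts)]
```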
transformers | 13,556 | closed | Mixture of non-prefixed and prefixed (B-, I-) |
# What does this PR do?
Reconsideration of #13493 for the mixture case, where B-TAG and I-TAG are used for multi-token entities and TAG is used for a single token.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil
This seems a bit overboard. I don't think mixed tags are desirable (it's very hard to know what a model will do).
The previous fix was about being backward compatible; this is a proposed breaking change, I think.
However, I don't necessarily have all the information.
@LysandreJik do you have an opinion ?<|||||>Pinging @stefan-it as he has extensive experience with NER, maybe you can enlighten us? :)<|||||>Yeah, I think this is also related to this issue:
https://github.com/huggingface/transformers/issues/4262
tldr: some of the NER datasets use IOB1 as the labeling scheme (namely CoNLL-2003 English and German).<|||||>Hi @stefan-it , but reading the conversation it just confirms that using I- only is the way to go, and that we cannot keep adjacent same-type entities separated with this scheme.
It seems that the proposed PR wouldn't actually solve the problem either, because B- tags are never output, right?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
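For readers unfamiliar with the schemes discussed above, a toy illustration of the same sentence under the two common labelings (example made up for illustration):

```python
tokens = ["Alex", "lives", "in", "New", "York"]
iob1 = ["I-PER", "O", "O", "I-LOC", "I-LOC"]  # B- only appears to split two adjacent entities of the same type
iob2 = ["B-PER", "O", "O", "B-LOC", "I-LOC"]  # every entity starts with a B- tag
```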
transformers | 13,555 | closed | Mismatch of implementations of attention mask in transformers and tokenizers | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.2
Models: BERT and other Transformer models
Library:
- tokenizers: @LysandreJik
## Problem and solution:
<!-- Problems and solution -->
In the current implementation of transformers, the attention mask is added to the attention scores before softmax. For instance: https://github.com/huggingface/transformers/blob/3ab0185b061baae207efed02799dd424ee8377f1/src/transformers/models/bert/modeling_bert.py#L326
When this kind of additive mask is used before the softmax, the masking values should be -inf. However, the tokenizers of the transformers library output 0 in the attention mask for padding positions and 1 for the other elements: https://huggingface.co/transformers/glossary.html#attention-mask
This would lead to very unpredictable behaviour in the final attention values after the softmax. Sometimes the attention mask may have no effect at all, depending on the magnitude of the attention scores before the softmax.
An alternative implementation is to use multiplication or logical_and to apply the attention mask to the attention scores (PyTorch, TensorFlow, JAX). Then it is fine to assign the value 0 to padding positions in the attention mask.
| 09-14-2021 03:54:10 | 09-14-2021 03:54:10 | Note that the head mask is implemented using multiplication: https://github.com/huggingface/transformers/blob/3ab0185b061baae207efed02799dd424ee8377f1/src/transformers/models/bert/modeling_bert.py#L337<|||||>Hi,
the `attention_mask` is first preprocessed by a function called `get_extended_attention_mask` as can be seen [here](https://github.com/huggingface/transformers/blob/51e5eca612d24896165ba8f7c83ecd0e8f695aa4/src/transformers/models/bert/modeling_bert.py#L963). This function is defined in `modeling_utils.py`. This function turns the `attention_mask` into a tensor that is 0 for positions we want to attend to, and -10.000 for positions we don't want to attend to as seen [here](https://github.com/huggingface/transformers/blob/51e5eca612d24896165ba8f7c83ecd0e8f695aa4/src/transformers/modeling_utils.py#L294-L301). <|||||>@NielsRogge Ah, I see. That makes sense. Thanks for the clarification.
However, are there any reasons why it was implemented this way? Because at least on the surface, it makes the code slightly more complicated, and more confusing and difficult to understand. The multiplicative head mask just a few lines after makes it even more confusing to people who are not fully familiar with the code base. And is -10000 enough to guarantee that the behaviour is always predictable (I am not sure, maybe it is)?
If there are specific reasons why it has been implemented this way, it might be a good idea to add a few comments at https://github.com/huggingface/transformers/blob/3ab0185b061baae207efed02799dd424ee8377f1/src/transformers/models/bert/modeling_bert.py#L326 and https://github.com/huggingface/transformers/blob/51e5eca612d24896165ba8f7c83ecd0e8f695aa4/src/transformers/models/bert/modeling_bert.py#L963
so that people don't have to dig deep into the code to find out what is going on.
Just some suggestions, otherwise this issue can be considered closed.
<|||||>> However, are there any reasons why it was implemented this way?
This was because the original BERT authors implemented it that way.
> It might be a good idea to add a few comments
Fair point! cc @LysandreJik
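For context, a minimal sketch of the mask preprocessing discussed in this thread (simplified; the real `get_extended_attention_mask` also handles dtypes, different mask shapes and causal masks):

```python
import torch

attention_mask = torch.tensor([[1, 1, 1, 0, 0]])  # 1 = attend, 0 = padding

# Shape (batch, 1, 1, seq_len) so it broadcasts over heads and query positions.
extended_mask = attention_mask[:, None, None, :].to(torch.float32)

# 0.0 where we attend, -10000.0 where we mask; this tensor is *added* to the
# raw attention scores right before the softmax.
extended_mask = (1.0 - extended_mask) * -10000.0
```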
<|||||>Indeed, adding a few comments would be helpful! Do you want to try your hand at it?<|||||>@LysandreJik Did you mean me or @NielsRogge ?
If it is not urgent, I can do it sometime next week.<|||||>Glad to hear it @superRookie007, looking forward to it!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,554 | closed | LayoutLMv2 processing doesn't handle tokenizer overflow | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.2
- Platform: MAC
- Python version: 3.8.9
- PyTorch version (GPU?): Not important
- Tensorflow version (GPU?):
- Using GPU in script? Mp
- Using distributed or parallel set-up in script?:
### Who can help
@NielsRogge @LysandreJik
## Information
We are porting our layoutlmv2 project to use Transformers instead of the UniLMFt package.
The additional functionality of the tokenizer has helped us to eliminate a good deal of alignment code!!
While evaluating `processing_layoutlmv2.py` I noticed that overflow wasn't being handled properly.
https://github.com/huggingface/transformers/blob/3ab0185b061baae207efed02799dd424ee8377f1/src/transformers/models/layoutlmv2/processing_layoutlmv2.py#L182-L205
In the above block the input is tokenized, potentially allowing overflow when `return_overflowing_tokens==True`; this will cause the length of `encoded_inputs` to be longer than the input sequence. E.g. if a page has 1k words and boxes, it will be returned as two sequences and there will be an `overflow_to_sample_mapping` attached to the `encoded_inputs`.
When adding the image
```
encoded_inputs["image"] = features.pop("pixel_values")
```
The length of `image` will be less than the rest of the encoded inputs if there is any overflow. This will cause: 1) a mismatch between page images and examples, and 2) examples at the end of the sequence will lack image embeddings.
## Expected behavior
We handle this by using the `overflow_to_sample_mapping` to find which image to pair with each sequence in the batch:
```
images = []
for batch_index in range(len(tokenized_inputs["input_ids"])):
    org_batch_index = tokenized_inputs["overflow_to_sample_mapping"][batch_index]
    image = examples["image"][org_batch_index]
    images.append(image)
tokenized_inputs["image"] = images
```
`return_offsets_mapping=True` is required for this to work, but you could consider raising if `return_overflowing_tokens` is True and `return_offsets_mapping` is False to maintain the ability to pair images with the correct sequences.
| 09-14-2021 01:44:44 | 09-14-2021 01:44:44 | Oh yes, thanks for spotting this. I added the overflow logic after implementing the processor. Will open a PR to fix this.<|||||>@NielsRogge , Any update on this?<|||||>@timothyjlaurent do you mind opening a PR for this?<|||||>@NielsRogge I'll try to get it in this week.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Any update here @timothyjlaurent? Would be great if you can contribute this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge I'll take a look at this. <|||||>Opened a PR @NielsRogge #17092. Just need your input in terms of the return type of `encoded_input["image"]` but otherwise this PR should fix the issue. |
transformers | 13,553 | closed | `prediction_loss_only` = False returns `float.detach()` error with HF trainer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.2
- Platform:
- Python version: 3.7
- PyTorch version (GPU?): 1.9.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
## Information
Model I am using (Bert, mBERT...):
The problem arises when using:
* [x] my own modified scripts: I override the `predict()` and `evaluate()` methods
The tasks I am working on is:
* [x ] my own task or dataset: XTREME multilingual tasks
## To reproduce
I was using transformers v2.11 and it worked fine. I upgraded to the current version of `transformers` to use `load_best_model_at_end`. It appears that when I set the flag `prediction_loss_only = True` there is no error. I logged the output and it is returning all the tensors. I don't know why there is a `float.detach()` error.
```
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer.py", line 1340, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer.py", line 1445, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer.py", line 1445, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/data/data_store/rabiul/codes/mumt/camtl/src/mumt_trainer.py", line 295, in evaluate
eval_dataset=eval_dataset,
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer.py", line 2058, in evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/data/data_store/rabiul/codes/mumt/camtl/src/mumt_trainer.py", line 295, in evaluate
eval_dataset=eval_dataset,
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer.py", line 2058, in evaluate
metric_key_prefix=metric_key_prefix,
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer.py", line 2223, in evaluation_loop
metric_key_prefix=metric_key_prefix,
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer.py", line 2223, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer.py", line 2451, in prediction_step
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer.py", line 2451, in prediction_step
logits = nested_detach(logits)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 145, in nested_detach
logits = nested_detach(logits)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 145, in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 145, in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 145, in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 146, in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "/student/mda219/garage/miniconda3/envs/py3gpu/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 146, in nested_detach
return tensors.detach()
AttributeError: 'float' object has no attribute 'detach'
return tensors.detach()
AttributeError: 'float' object has no attribute 'detach'
```
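For context, here is a minimal self-contained sketch (hypothetical, not the actual training script) of how `nested_detach` ends up calling `.detach()` on a plain Python float — the function recurses into tuples and detaches every leaf, mirroring the lines shown in the traceback above:
```python
import torch

def nested_detach(tensors):
    # Same recursion as transformers.trainer_pt_utils.nested_detach in the trace:
    if isinstance(tensors, (list, tuple)):
        return type(tensors)(nested_detach(t) for t in tensors)
    return tensors.detach()

outputs = (torch.randn(2, 3), 0.5)  # a plain float sneaks into the model outputs
try:
    nested_detach(outputs)
except AttributeError as err:
    print(err)  # 'float' object has no attribute 'detach'

# Wrapping scalar outputs in tensors avoids the error:
nested_detach((torch.randn(2, 3), torch.tensor(0.5)))
```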
| 09-13-2021 21:25:43 | 09-13-2021 21:25:43 | Hi,
Could you provide a code snippet that can reproduce this error? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,552 | closed | [WIP] Add TFSpeech2Text | # What does this PR do?
This adds TFSpeech2Text. I couldn't reopen the original PR because of the changes I made while the PR was closed.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patil-suraj | 09-13-2021 20:04:27 | 09-13-2021 20:04:27 | 'generation_tf_utils.py' needs to be modified to accept "input_features". So right now the tricky part is adding that ability with the least amount of changes. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I've had a really busy last couple of months so I haven't been able to work on it. I should have some time soon. All that's left is modifying 'generation_tf_utils.py' for generation with 'input_values' instead of 'input_ids'<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,551 | closed | QUESTION: How to perform 2D interpolation of pre-trained position embeddings for fine-tuning on Vision Transformers? | Hi all,
I know that in examples for fine-tuning on the pre-trained ViT, there is a ViTFeatureExtractor that can upsample images from CIFAR10 -> 224x224 to match the image resolution that that model is pre-trained on. However in the paper, the authors suggest to perform 2D Interpolation on the pre-trained position embeddings. I saw here in the TIMM source code (https://github.com/rwightman/pytorch-image-models/blob/54e90e82a5a6367d468e4f6dd5982715e4e20a72/timm/models/vision_transformer.py#L481) there is a method to handle this interpolation. Is there a similar method / parameter that I can use in the HuggingFace ViT to enable this interpolation behavior, in lieu of upsampling the CIFAR10 32x32 to match 224x224? | 09-13-2021 19:54:18 | 09-13-2021 19:54:18 | Hi,
See #12167 |
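For readers landing here, below is a minimal sketch of the 2D interpolation idea (a hypothetical helper along the lines of timm's `resize_pos_embed`, not an official `transformers` API):
```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed, new_grid_size, old_grid_size=14):
    # pos_embed: (1, 1 + old_grid_size**2, dim); index 0 is the [CLS] token.
    cls_token, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    dim = patch_pos.shape[-1]
    patch_pos = patch_pos.reshape(1, old_grid_size, old_grid_size, dim).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_grid_size, new_grid_size),
                              mode="bicubic", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid_size * new_grid_size, dim)
    return torch.cat([cls_token, patch_pos], dim=1)
```
The interpolated tensor would then replace the model's position embedding parameter; for example, 32x32 inputs with 16x16 patches correspond to `new_grid_size=2`.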
transformers | 13,550 | closed | Nightly torch ci | Setup a job that runs the entire PyTorch suite against PyTorch nightly. Runs every three days and posts the results to the usual Slack channel for easier consumption. | 09-13-2021 19:41:19 | 09-13-2021 19:41:19 | |
transformers | 13,549 | closed | Nightly ci torch | Superseded by https://github.com/huggingface/transformers/pull/13550 | 09-13-2021 19:39:28 | 09-13-2021 19:39:28 | |
transformers | 13,548 | closed | RFC: split checkpoint load/save for huge models | # 🚀 Feature request
While [discussing with pytorch devs adding the ability to load/save state_dict on the finer granularity level and not needing to manifest the whole state_dict in memory](https://github.com/pytorch/pytorch/issues/64327), we have an additional issue of the model file just being too large. I'd like to propose for `transformers` to support multi-part checkpoints.
Reasons for the need:
- the hub limitation: Cloudfront does not support >20GB files so downloads via s3 can't be fast with those large files
- the current pytorch issue loading the whole state_dict into memory and requiring 2x model size in memory - checkpoint conversion is quite demanding on memory as well for the same reason.
- in general it's a potential issue for users with imperfect up/down internet connection. uploading/downloading 25GB files is still not easy for all.
Possible solutions:
1. as mentioned [here](https://github.com/pytorch/pytorch/issues/64327), [SplitCheckpoint](https://github.com/finetuneanon/transformers/blob/ca5d90ac1965982db122a649c2c9c902bde74a03/src/transformers/modeling_utils.py#L417-L443) already implements a possible solution which saves each state_dict's key separately
2. as solution 1, but we may save groups of these - e.g. save each layer's keys together in one pickled state_dict per layer. I looked at some large models and they will have a huge number of keys, e.g. even t5-small has ~150 keys. But this approach would be more complicated since we now need to define the container block and it'll be different from model to model. Maybe by sub-module? So perhaps the first solution is much simpler.
The only addition I'd propose to actually name the files with the full key name rather than obscure files like `m18.pt` as implemented by [SplitCheckpoint](https://github.com/finetuneanon/transformers/blob/ca5d90ac1965982db122a649c2c9c902bde74a03/src/transformers/modeling_utils.py#L417-L443) , and which require an extra file to do look ups.
So my proposal is:
```
config.json
merges.txt
README.md
tokenizer.json
vocab.json
pytorch_model/map.pt
pytorch_model/shared.weight.pt
pytorch_model/encoder.embed_tokens.weight.pt
[...]
pytorch_model/encoder.block.3.layer.0.SelfAttention.v.weight.pt
[...]
pytorch_model/decoder.block.5.layer.1.EncDecAttention.q.weight.pt
[...]
pytorch_model/lm_head.weight.pt
```
These are all raw files not belonging to any archive, and `map` just holds the list of keys in their original order for when `OrderedDict` ordering matters.
The cost of the 1st solution is somewhat slower save/load. I haven't benchmarked, but the IO will be the bottleneck here, and the ZIP structure currently gets unravelled one tensor at a time anyway, so the difference is likely to be negligible.
Other solutions are welcome.
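To make solution 1 concrete, a rough sketch follows (illustrative only — the file names and `map.pt` format are assumptions taken from the layout above, not a settled API):
```python
import os
import torch

def save_split(state_dict, folder="pytorch_model"):
    os.makedirs(folder, exist_ok=True)
    # map.pt keeps the ordered key list so an OrderedDict can be rebuilt on load.
    torch.save(list(state_dict.keys()), os.path.join(folder, "map.pt"))
    for key, tensor in state_dict.items():
        torch.save(tensor, os.path.join(folder, f"{key}.pt"))

def load_split(folder="pytorch_model"):
    keys = torch.load(os.path.join(folder, "map.pt"))
    # Only one tensor is materialized at a time, never the full state_dict twice.
    return {key: torch.load(os.path.join(folder, f"{key}.pt")) for key in keys}
```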
Other examples of split checkpoints:
- Deepspeed's pipeline (PP) saves each layer as a separate checkpoint, which allows to quickly change the PP degree at run time.
Threshold:
- need to define the threshold at which we automatically switch to this multi-part format unless the user overrides the default. Probably can use the size of the model as the measurement. I think it should be 3B or even less. If model size == 3B, the resulting file sizes are:
1. 6GB in fp16/bf16
2. 12GB in fp32.
@patrickvonplaten, @patil-suraj, @LysandreJik, @sgugger, @julien-c | 09-13-2021 18:09:24 | 09-13-2021 18:09:24 | Thanks for the write-up!
I'm happy with a multi-part checkpoint solution I think if it's clearly the cleanest solution to reduce the 2x-model-size-memory-requirement-at-loading problem. If there is a rather simple solution to solve the 2x model size memory problem while keeping a single checkpoint file I'd be in favor of this because:
- people are used to it
- it's more in line with TF and PT
- it's less complex as for large models 500+ checkpoints can really clutter the model repo
IMO, the points:
- the hub limitation: Cloudfront does not support >20GB files so downloads via s3 can't be fast with those large files
- in general it's a potential issue for users with imperfect up/down internet connection. uploading/downloading 25GB files is still not easy for all.
are not very important since when dealing with such large models, download times will take some time anyways and even a speed-up in download from 10min to 3min is no that important since the files are cached afterwards anyways IMO. Also I'm not sure if it's an advantage to being able to load layers individually since I can see people getting lost in for which layers the upload was successful / unsuccessful. <|||||>Thanks for your feedback, @patrickvonplaten
> * it's less complex as for large models 500+ checkpoints can really clutter the model repo
Have you meant checkpoints or files? The proposal is to have a separate dir so from the top level nothing has changed.
I'm with you that when there are many hundreds of params it may become somewhat hard to manage. As you flagged watching that all files got correctly uploaded and more chances for a timeout on a single file during the download. I was sitting on the fence about posting this proposal because of these possible issues and after encountering multiple problems with uploading a 26GB model I decided to go ahead with it.
So I'm hoping that perhaps some alternative solution will come from this discussion.
<|||||>How about a compromise that regroups weights in a state dict until we get to the biggest file possible still < 20GB. This way we don't clutter the repo and only split the checkpoint when necessary, while also avoiding the annoying timeouts. We can then have a load in several part to recover the whole weights. It would require a bit of refactor in `from_pretrained` but not too much, the bulk of the work would be in `save_pretrained` to split the checkpoint in several parts when needed.<|||||>Well, the idea is to make models a bit lighter in size, and 20GB is still a huge size! Though your suggestion is sensible.
But to put the issue in perspective consider our BigScience's plan to train a 200B model in a few months. That means the model will be 400GB in fp16, 800GB in fp32.
One of the possible approaches there would be to split it by layer, each of which would still be huge. so there perhaps separate param per file would be just right.
<|||||>
BTW, there was another related discussion of compressing the models: https://github.com/huggingface/transformers/issues/10008<|||||>And here is a non-completed PR from @anthon-l, where the checkpoint is 19GB https://github.com/huggingface/transformers/pull/10301
I guess t5-11b is the only other one that's very large on the hub - 45GB in size, or do we have more examples?
<|||||>Think the biggest models we have on the hub are the 45GB T5 models<|||||>if we have more and more large models, for instance the ones from Bigscience, i think we need to split checkpoints indeed.
> download times will take some time anyways and even a speed-up in download from 10min to 3min is no that important
(quote from @patrickvonplaten)
Depending on where you (or your server) are in the world and your kind of connection, the delta can be way bigger – 10x to 100x download speed is likely in a lot of cases in my opinion.
If we end up having 400GB artefacts in the hub, I think strictly no-one will be able to load them if we stick to single file (which, as @stas00 mentioned, _implies_ download from `us-east` S3)
More precisely, the speed difference between Cloudfront and S3 increases with how "far" (in networking topology) you are from the AWS `us-east` region.<|||||>So the BigScience 176B model is training and the bf16 checkpoint is 329GB.
So please let's resume this discussion and bring it to completion soon as we need to start working on converting the checkpoints since the group desires to already start doing inference on an even incompletely trained model.
While at it let's also incorporate the discussion of `torch.load` due to the `pickle` security issues.
Thank you!
@patrickvonplaten, @patil-suraj, @LysandreJik, @sgugger, @julien-c, @thomwolf <|||||>also @Narsil<|||||>And @julien-c's comment to remind:
> shards should be in the 5-30GB range if possible (in all case, always smaller than 30GB)
(that was changed recently from 20GB upper limit)<|||||>Here is the breakdown of the 176B model's layers in BF16 weights:
```
$ NHIDDEN=14336; NLAYERS=70; SEQ_LEN=2048; VOCAB_SIZE=250680; python -c "h=$NHIDDEN; l=$NLAYERS; \
s=$SEQ_LEN; v=$VOCAB_SIZE; t=2*(12*h**2 + 13*h); r=2*(v*h + s*h + 2*h); \
print(f'BF16 Transformer block size: {t/2**30:.02f}GB, the rest is: {r/2**30:.02f}GB, Total {(l*t+r)/2**30:.02f}GB')"
BF16 Transformer block size: 4.59GB, the rest is: 6.75GB, Total 328.34GB
```
So basically, save each layer separately to the tune of 4.59GB per checkpoint - 70 of those, one additional checkpoint with the rest of the weights of 6.75GB.
A total of 71 shards.
This is very close to how Megatron-Deepspeed's pipeline shards are already saved, Except it saves each layer in a separate file, perhaps we should follow suite, whereas I dumped all the non-transformer layers into one in the breakdown above. It makes it very easy to load one layer at a time following the model's layers one by one.<|||||>While working out the design it's good to look at the concrete model, but also to consider smaller and bigger models, as those surely are coming. That is to validate that one file per layer is a good solution, if you scroll back up to the beginning of this discussion this was my original proposal.<|||||>I'm okay with a system that will group weights until we get to the max < 5GB (or any number) then creates a new shard automatically for the next weights. Saving each layer separately has several drawbacks:
- you can't decide to do it automatically, as for smaller models it doesn't necessarily make sense
- it will result in multiple calls to fetch every file (even when they are downloaded, we will still call the HF API to see if there is an update for each of the files since it's in the core of the system in Transformers)
so I'd avoid that solution.
The `save_pretrained` method should then probably do this by default (so smaller models would be saved as before and large models would be automatically split) even if it's a breaking change, as it interfaces better with the Hub. We would of course have an argument to save the model as a whole for users that want to reuse their large model directly, without using `from_pretrained`.
The main problem when reloading is then going to be how to handle the calls to the Hub to know when `from_pretrained` should ask for one file `pytorch_model.bin` or several (probably named `pytorch_model_xxx.bin`. Since we can't call the API that returns the list of model files as it's not optimized (feel free to correct me @julien-c), we should have a config attribute that tells us when there are multiple checkpoints so we know the list of files we have to download.
I think the discussion around `torch.load` is orthogonal to this discussion (at least the way everything is written right now in Transformers) so I would leave it for a different issue.
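For illustration, here is a rough sketch of the grouping idea (a hypothetical helper, not the eventual `save_pretrained` implementation) that packs weights into shards up to a size budget and records a weight-to-file index:
```python
import torch

def shard_state_dict(state_dict, max_shard_bytes=5 * 1024**3):
    """Group weights into shards of at most ~max_shard_bytes each."""
    shards, current, current_size = [], {}, 0
    for name, tensor in state_dict.items():
        tensor_bytes = tensor.numel() * tensor.element_size()  # dtype-aware size
        if current and current_size + tensor_bytes > max_shard_bytes:
            shards.append(current)
            current, current_size = {}, 0
        current[name] = tensor
        current_size += tensor_bytes
    if current:
        shards.append(current)
    # Weight-name -> shard-file index, which could be dumped as JSON to
    # something like pytorch_model.bin.index next to the shard files.
    index = {
        name: f"pytorch_model-{i:05d}-of-{len(shards):05d}.bin"
        for i, shard in enumerate(shards)
        for name in shard
    }
    return shards, index
```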
<|||||>The problem of "has-it-changed" can be solved by saving an additional file which is the index to all other files, which could also provide a map of how to load the rest of the files. i.e. it'll tell which specific weights are located in which file. Since your proposal of `pytorch_model_xxx.bin` is somewhat difficult on its own since there is no indication of what's inside `pytorch_model_xxx.bin`
If there is just one index file to check, the hub functionality can check if there is `pytorch_model.bin` (single part model) or ` pytorch_model_index.bin` (multi-part model) - so only 2 files to check.<|||||>The problem of how to slice can be solved by a threshold setting at which the model starts sharding - probably 5GB is a good choice, since downloading more than 5GB can be an issue for those w/o bad connectivity.
And of course the math would be different for a model with bf16/fp16 weights vs fp32 weights as the latter will use 2x bytes. So the grouping should then happen by `params*dtype_bytes < gb_threshold`
So as long as the model is smaller than 5GB, it will remain a single file always.
But I'm mostly repeating Sylvain here, just added the component of dtype size (2 vs 4).<|||||>The idea of an index that provides a map is very good, I like it!<|||||>Should the threshold be configurable by the user with the default set to what we prefer for the hub?
I'm asking since there will be users who will not use the hub and will perhaps want a 100GB model file.
While we are at it, does the hub uploader check that the file is not bigger than 20GB before attempting the upload?<|||||>Adding my two cents:
- I agree that threshold should be configurable. We implemented a similar threshold in the `datasets`' `push_to_hub` [method](https://github.com/huggingface/datasets/pull/3098/files#diff-247ce37d2f96c4fc8e77ce55dc9922675a5f7f561f8df0cbc563d13048c59abdR3319) and it should provide the most flexibility. Having 5GB as a default also sounds good to me.
- I don't think the hub checks that the file is bigger than 20GB. Ideally we'd send out a very visible warning when a checkpoint is saved with larger shards than 20GB.
- In `datasets`, for a file split over three shards, we use the following format:
- `<filename>-00000-of-00003.<extension>`
- `<filename>-00001-of-00003.<extension>`
- `<filename>-00002-of-00003.<extension>`
This naming can be adopted if each individual shard are valid files in the `<extension>` format. If we split semantically (on weights) rather than on bytes, this naming makes sense, otherwise, the extension would be moved inside: `<filename>.<extension>-00000-of-00003` .
If there are no other constraints on the naming, I'd vote to have a similar format, for both consistency's sake and to be able to quickly identify which files should be replaced in case of a `push_to_hub` that would need to overwrite a previously uploaded checkpoint, with a possibly differing number of shards.<|||||>And my 2 cents.
Can the config actually hold the location of its various weight files (relative to the repo) in something like `config.weight_files` which could default to `["pytorch_model.bin"]` and `["tf_model.h5"]` depending on the Arch.. ?
I think the config has to be downloaded for any model download so it could become the index itself no ? And users can choose whichever splitting they seem is more appropriate ?
Having an index is very important IMO to allow resharding existing repos (it does make them inaccessible to the old versions of the library that don't have the sharding logic though)<|||||>(nit) `pytorch_model.bin.index` is a better filename than `pytorch_model_index.bin` in my opinion (this scheme could also apply to other files and will be very easy to check on the hub side)<|||||>> * If we split semantically (on weights) rather than on bytes, this naming makes sense, otherwise, the extension would be moved inside: `<filename>.<extension>-00000-of-00003` .
absolutely by layer, and not bytes, which makes it possible to load parts of the model directly on different gpus. e.g. think Pipeline which will load for example file 0+1 on gpu 0, file 2+3 on gpu 1, etc.<|||||>> Can the config actually hold the location of its various weight files
that's exactly what we have been discussing - there will be an index file (not config) which will map weights to files, which of course then allows you to point to whatever file you want, including a different format.<|||||>> which will map weights to files,
Why do you need that ? If the files are in a nice format, looking up their weights names should be extremely fast (and never load the tensors). The main issue I see with an external index containing weight_names is the potential for it being outdated/inconsistent with what's actually within the other files. Recovering from the inconsistent state is going to add more code logic.
If we keep the index in the config (withtout weight names) then, at most a file is missing which is sort of the same issue as `pytorch_model.bin` missing.
Then we can just recreate the weight_name index when looking into the files and so there's no option for it being inconsistent because we don't commit it to disk.
Even in pure zip/pickle from PT I think it's possible to recreate that index on the fly with very little overhead but I will have to check.
<|||||>The index files is needed for the following reasons:
1. only one file to look up for is-modified check - to see if the model needs to be re-downloaded - I don't see any reason why the code that generates the index file and the shards should lead to out of sync situation. It can also include md5 checksums to able to always verify that the files are in sync, which we probably need to do anyway.
2. in the case of pipeline parallelism it allows one to load just the weights each pipeline stage needs and this can't be derived from a file name and will require loading all files for each stage, rather than one file. It'd be different if each layer had its own file, then it'd be self-naming - but it looks like the proposal so far is that we are going to pack multiple layers together for smaller models. The Deepspeed pipeline saves each layer in its own file named after that layer id and thus avoids the need for an index file.
BTW, git LFS already provides us with checksums: e.g., https://huggingface.co/gpt2/blob/main/pytorch_model.bin
```
SHA256: 7c5d3f4b8b76583b422fcb9189ad6c89d5d97a094541ce8932dce3ecabde1421
```
so it should be instant to validate that they match w/o actually running a slow hashing program.
So download the index file - compare the hash of LFS files it points to - verification done.<|||||>> only one file to look up for is-modified check - to see if the model needs to be re-downloaded - I don't see any reason why the code that generates the index file and the shards should lead to out of sync situation. It can also include md5 checksums to able to always verify that the files are in sync, which we probably need to do anyway.
You cannot protect from users doing things without you expecting it, like uploading single files instead of using the API we designed and therefore breaking consistency. My experience with any system, is that if it CAN be out of sync, it will end up out of sync. And recovering is usually much harder than preventing it altogether.
> So download the index file - compare the hash of LFS files it points to - verification done.
This means that the files look coherent, it doesn't mean that weight `X` is actually within file `Y`.
> In the case of pipeline parallelism it allows one to load just the weights each pipeline stage needs
Checking the weight names is very fast:
```
Found 148 tensors, took 0:00:00.088987 on file of len 474.9MiB
Found 436 tensors, took 0:00:00.107209 on file of len 2.9GiB
```
(Corresponding code at the end)
Downloading the files selectively still requires careful sharding on your end (if the sharding is poorly done, and some layers are spread across multiple files, then you still need to download everything).
This is very reasonable, and enabling downloading only what's necessary seems like a great feature for such large models.
Having no weight_index and good internal sharding could enable you as a user to do that (`shard1.h5`, `shard2.h5`, `common_weights.h5` maybe ?).
What happens if the sharding changes based on the hardware your are running ? (Like moving from 8 A100 to 64 G4 maybe).
Are layers even the good amount of sharding in case of Deepspeed parallelism you can have tensor being sharded themselves no ?
Just asking questions to understand the requirements better for enabling downloading only what's necessary, it seems more is at play than pure file sharding.
Btw, using something like `h5` could enable streaming only necessary parts of even a large file probably (but does require logic and a server, I would need to check but maybe HTTP ranges is doable, not sure how that plays with Cloudfront cache though)
The other idea I have is if that index could be somehow inferred by the hub itself, it would save quite a bit of headaches of inconsistent files
@julien-c Do you think deriving that index per weight file automatically on the hub is doable ?
```python
from huggingface_hub import hf_hub_download
import datetime
import h5py
import os
def _load(f, tensors, prefix=""):
for k in f.keys():
if isinstance(f[k], h5py._hl.dataset.Dataset):
tensors[f"{prefix}_{k}"] = f[k].shape
else:
tensors.update(_load(f[k], tensors, prefix=f"{prefix}_{k}"))
return tensors
def filelen(filename, suffix="B"):
num = os.path.getsize(filename)
for unit in ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi"]:
if abs(num) < 1024.0:
return f"{num:3.1f}{unit}{suffix}"
num /= 1024.0
return f"{num:.1f}Yi{suffix}"
def check(filename):
start = datetime.datetime.now()
shapes_gpt2 = {}
with h5py.File(filename, "r") as f:
_load(f, shapes_gpt2)
print(
f"Found {len(shapes_gpt2)} tensors, took {datetime.datetime.now() - start} on file of len {filelen(filename)}"
)
def main():
filename = hf_hub_download("gpt2", filename="tf_model.h5")
filename_large = hf_hub_download("gpt2-large", filename="tf_model.h5")
check(filename)
check(filename_large)
if __name__ == "__main__":
main()
```
<|||||>Jumping up because I might be helping to implement this while adding the 176B parameters model
Note that there are 2 notions of sharding here:
- sharding on the hub: goal is to keep all the files at less than 5-20GB to get cloudfront/fast-downloads
- sharding the model locally: goal is to be able to load the model when there are less CPU memory than the size of the model for instance (which happens currently for the 176B model on Jean Zay) - this could be related to ZeRO3 at some point but probably we should not depend on DeepSpeed save file format and be more flexible to accomodate TF/JAX/etc
A threshold of 5-10GB per file should allow both sharding limit. On my side I would stay quite open in how the weights are spread among files (don't force per layer) and use a mapping file as mentioned above.
So we could go toward this organization I guess inspired by @stas00 first proposal:
```
README.md
config.json
[tokenizer files....]
pytorch_model.bin.index => pickled dictionary mapping state_dict keys to file names in the pytorch_model folder
pytorch_model/pytorch_model-00000-of-00003.pt => pickle of dict of weights
pytorch_model/pytorch_model-00001-of-00003.pt => pickle of dict of weights
pytorch_model/pytorch_model-00002-of-00003.pt => pickle of dict of weights
```
The index mapping file `pytorch_model.bin.index` could also be a human readable file (JSON for instance) by the way. Not sure we gain much by having it stored in binary.<|||||>> The index mapping file pytorch_model.bin.index could also be a human readable file (JSON for instance) by the way. Not sure we gain much by having it stored in binary.
Plus, we're trying to stay away from pickle those days :-)<|||||>Just replying on this for now:
> HTTP ranges is doable, not sure how that plays with Cloudfront cache though
yes Cloudfront supports range requests 👍 <|||||>Are we considering my proposal to completely part ways from torch checkpoint and using a tied dbm file instead? https://github.com/pytorch/pytorch/issues/64327
This makes the checkpoint manipulation completely transparent and removes all the custom framework specific formatting and allows for a single checkpoint that could be loaded by any framework.
Now you can load or save any layer or even a single weight w/o caring about how it's implemented underneath.
So in this case it'll just use multiple dbm files to handle shards.
The proposal includes a fully working code prototype and would just require logic to handle sharding.<|||||>#16343 has been merged so I believe this may be closed?<|||||>Just posting a small script I used to shard any model: https://gist.github.com/younesbelkada/382016361580b939a87edcddc94c6593 people may want to use it in the future to push sharded models ! |
transformers | 13,547 | closed | train loss is not decreasing using TFBertModel | I have used the TFBertModel and AutoModel from the transformer library for training a two-class classification task and the training loss is not decreasing.
```
bert = TFBertModel.from_pretrained('bert-base-uncased')
input_ids = tf.keras.layers.Input(shape=(SEQ_LEN,), name='input_ids', dtype='int32')
mask = tf.keras.layers.Input(shape=(SEQ_LEN,), name='attention_mask', dtype='int32')
embeddings = bert(input_ids, attention_mask=mask)[1]
X = tf.keras.layers.Dropout(0.1)(embeddings)
X = tf.keras.layers.Dense(128, activation='relu')(X)
y = tf.keras.layers.Dense(1, activation='sigmoid', name='outputs')(X)
bert_model = tf.keras.Model(inputs=[input_ids, mask], outputs=y)
bert_model.compile(optimizer='adam',
                   loss='binary_crossentropy',
                   metrics=['accuracy'])
```
But when I use the TFBertForSequenceClassification model the model converges fast and the training loss reaches zero.
```
bert_model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5, epsilon=1e-08)
bert_model.compile(loss=loss, optimizer=optimizer, metrics=[metric])
```
I want to use the sequence output of BERT and hence I need to load the model with TFBertModel or something similar which returns the outputs of BERT.
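One variable worth checking (a hedged suggestion, not a confirmed diagnosis): the working `TFBertForSequenceClassification` snippet uses Adam with a 2e-5 learning rate, while the custom head above compiles with the default `'adam'` (learning rate 1e-3), which is often too high for fine-tuning BERT. A sketch of the same custom-head model compiled with the smaller learning rate:
```python
import tensorflow as tf

# Assumes `bert_model` is the custom-head Keras model built above.
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5, epsilon=1e-08)
bert_model.compile(optimizer=optimizer,
                   loss='binary_crossentropy',
                   metrics=['accuracy'])
```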
@Rocketknight1 | 09-13-2021 15:17:22 | 09-13-2021 15:17:22 | Hi,
For training-related questions, please refer to the [forum](https://discuss.huggingface.co/). We like to keep Github issues for bugs/feature requests.
Closing this for now.
Thank you! |
transformers | 13,546 | closed | RAG issue | ## Environment info.
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:4.8.2
- Platform:Windows
- Python version:3.7.10
- PyTorch version (GPU?):CPU version 1.9.0
- Tensorflow version (GPU?):
- Using GPU in script?:no
- Using distributed or parallel set-up in script?:
Models:
- RAG
facebook/rag-sequence-nq
I am trying to use the retriever process of RAG mentioned in https://huggingface.co/transformers/model_doc/rag.html
I have created my own knowledge embeddings using the process provided in the documentation. The retrieval process works perfectly in transformers version 4.6.1 but fails in version 4.8.2.
The issue is in this step: `tokenizer = RagTokenizer.from_pretrained(r"C:\My_Projects\code-repo\retriever\models")`  # local dir hosting the model facebook/rag-sequence-nq
ValueError: unable to parse C:\My_Projects\code-repo\retriever\models\tokenizer_config.json as a URL or as a local path.
Any help would be appreciated, as I have a feeling it's a version issue. Do we need to implement it differently in the latest version?
| 09-13-2021 14:28:26 | 09-13-2021 14:28:26 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,545 | closed | Load BERT as DPRQuestionEncoder using from_pretrained method | ## Environment info
```
- `transformers` version: 4.10.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
## To reproduce
```
from transformers import DPRQuestionEncoder
model = DPRQuestionEncoder.from_pretrained('bert-base-uncased')
```
# Error
```
You are using a model of type bert to instantiate a model of type dpr. This is not supported for all configurations of models and can yield errors.
NotImplementedErrorTraceback (most recent call last)
<ipython-input-36-1f1b990b906b> in <module>
----> 1 model = DPRQuestionEncoder.from_pretrained(model_name)
2 # https://github.com/huggingface/transformers/blob/41cd52a768a222a13da0c6aaae877a92fc6c783c/src/transformers/models/dpr/modeling_dpr.py#L520
/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1211 )
1212
-> 1213 model, missing_keys, unexpected_keys, error_msgs = cls._load_state_dict_into_model(
1214 model, state_dict, pretrained_model_name_or_path, _fast_init=_fast_init
1215 )
/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py in _load_state_dict_into_model(cls, model, state_dict, pretrained_model_name_or_path, _fast_init)
1286 )
1287 for module in unintialized_modules:
-> 1288 model._init_weights(module)
1289
1290 # copy state_dict so _load_from_state_dict can modify it
/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py in _init_weights(self, module)
515 Initialize the weights. This method should be overridden by derived class.
516 """
--> 517 raise NotImplementedError(f"Make sure `_init_weigths` is implemented for {self.__class__}")
518
519 def tie_weights(self):
NotImplementedError: Make sure `_init_weigths` is implemented for <class 'transformers.models.dpr.modeling_dpr.DPRQuestionEncoder'>
```
## Expected behavior
The weights of the model should be loaded correctly.
| 09-13-2021 12:56:02 | 09-13-2021 12:56:02 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I believe this should be fixed on `master` now!<|||||>Hi, how can I load it correctly? I still have this problem with transformers==4.10.2.
I have tried with transformers==4.12.5. Although there are no errors, most parameters are not correctly initialized.
You are using a model of type bert to instantiate a model of type dpr. This is not supported for all configurations of models and can yield errors.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing DPRQuestionEncoder: ['bert.encoder.layer.0.attention.self.value.bias',
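For anyone hitting this, a hedged sketch of a workaround (the `question_encoder.bert_model` attribute path is taken from the DPR modeling code and may differ across versions): load the BERT weights into the BertModel that DPR wraps, so the parameter names line up, instead of passing the BERT checkpoint to `DPRQuestionEncoder.from_pretrained` directly.
```python
from transformers import BertConfig, BertModel, DPRConfig, DPRQuestionEncoder

bert = BertModel.from_pretrained("bert-base-uncased")
config = DPRConfig(**BertConfig.from_pretrained("bert-base-uncased").to_dict())
encoder = DPRQuestionEncoder(config)
# strict=False because e.g. the pooler weights have no counterpart in DPR's encoder.
missing, unexpected = encoder.question_encoder.bert_model.load_state_dict(
    bert.state_dict(), strict=False
)
print("missing:", missing, "unexpected:", unexpected)
```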
|
transformers | 13,544 | closed | Can I convert a MarianMT model to a TensorFlow Lite model? |
I'd like to run the MarianMT model on TensorFlow Lite. How do I convert the en-zh model to a .pb model or a .tflite model?
| 09-13-2021 11:36:40 | 09-13-2021 11:36:40 | I have tried to convert the .h5 to .pb, but something weird happens.
My tensorflow version is 2.6.0.
this is my dir info:
(venv) ➜ Documents tree opus-mt-en-zh
opus-mt-en-zh
├── README.md
├── config.json
├── flax_model.msgpack
├── metadata.json
├── pytorch_model.bin
├── rust_model.ot
├── source.spm
├── target.spm
├── tf_model.h5
├── tokenizer_config.json
└── vocab.json
When I load the model with TensorFlow, it crashes:
```
import tensorflow as tf
model = tf.keras.models.load_model("./opus-mt-en-zh",compile=False)
```
```
Traceback (most recent call last):
File "translate.py", line 27, in <module>
model = tf.keras.models.load_model("./opus-mt-en-zh",compile=False)
File "/Users/xuanyue/venv/lib/python3.8/site-packages/keras/saving/save.py", line 205, in load_model
return saved_model_load.load(filepath, compile, options)
File "/Users/xuanyue/venv/lib/python3.8/site-packages/keras/saving/saved_model/load.py", line 108, in load
meta_graph_def = tf.__internal__.saved_model.parse_saved_model(path).meta_graphs[0]
File "/Users/xuanyue/venv/lib/python3.8/site-packages/tensorflow/python/saved_model/loader_impl.py", line 118, in parse_saved_model
raise IOError(
OSError: SavedModel file does not exist at: ./opus-mt-en-zh/{saved_model.pbtxt|saved_model.pb}
```
I also tried this:
```
import tensorflow as tf
from transformers import MarianMTModel, MarianTokenizer
print("loading model...")
model_name = 'Helsinki-NLP/opus-mt-en-zh'
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
print("finished loading model")
converter = tf.lite.TFLiteConverter.from_keras_model(model);
tflite_model = converter.convert();
```
but it died with:
```
(venv) ➜ Documents python translate.py
loading model...
finished loading model
Traceback (most recent call last):
File "translate.py", line 10, in <module>
tflite_model = converter.convert();
File "/Users/xuanyue/venv/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 729, in wrapper
return self._convert_and_export_metrics(convert_func, *args, **kwargs)
File "/Users/xuanyue/venv/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 715, in _convert_and_export_metrics
result = convert_func(self, *args, **kwargs)
File "/Users/xuanyue/venv/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 1123, in convert
self._freeze_keras_model())
File "/Users/xuanyue/venv/lib/python3.8/site-packages/tensorflow/lite/python/convert_phase.py", line 218, in wrapper
raise error from None # Re-throws the exception.
File "/Users/xuanyue/venv/lib/python3.8/site-packages/tensorflow/lite/python/convert_phase.py", line 208, in wrapper
return func(*args, **kwargs)
File "/Users/xuanyue/venv/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 1066, in _freeze_keras_model
if not isinstance(self._keras_model.call, _def_function.Function):
File "/Users/xuanyue/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'MarianMTModel' object has no attribute 'call'
```
Any suggestions?
<|||||>Maybe @Rocketknight1 has some experience with TF Lite<|||||>Hi! In general, we don't support conversions to TFLite - it has a very limited set of operations and that makes it hard to automatically convert a lot of models.
You can try this yourself if you want, but we probably can't offer detailed assistance. The bug you encountered, however, is most likely caused by the fact that you used `MarianMTModel`, which is the Pytorch model instead of `TFMarianMTModel`. If you try converting the `TFMarianMTModel` it might work, but I suspect there will still be other problems!<|||||>Sorry to tag @Rocketknight1 in this again - but I had another question on this topic: I was able to get a rudimentary yet successful quantized conversion of the TFMarianMTModel ( Sample script provided [here](https://colab.research.google.com/drive/1Vsinmju5FsbiEtfD9BqBHWeKxM3LES-U?usp=sharing)) for tflite. I was even able to ( not mentioned in this script ) generate nearly identical encoder outputs with the tflite interpreter on edge - but I am uncertain of how to invoke the decoder for the decoding process.
I am not sure if there is any way I could even port this model as a whole ( since the decoder will have to be invoked repeatedly for the decoding process on edge ).
More specifically - Can you please give any tips on how I can go about doing any of the following ?
a. Is there any way I could use [Tensorflow Signatures](https://www.tensorflow.org/lite/guide/signatures#:~:text=Signatures%20can%20be%20specified%20when%20building%20a%20SavedModel,TensorFlow%20Lite%20model%20to%20support%20multiple%20entry%20points.) to separately invoke the interpreter for the encoder and decoder ?
b. Is there any way I can port the TFMarianMTModels encoder and decoder separately ? ( so that I may invoke them as two different graphs )
Thanks in advance for any help. <|||||>Hello @harshitadd,
I'll try to help. I see you converted like this:
```
print('Converting Model to tflite')
converter = tf.lite.TFLiteConverter.from_saved_model('config_model')
print('Optimizing for Size Reduction')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```
Can you try the code snippets below? Call the encoder and decoder inferences under separate functions decorated under @tf.function() when you're defining the Model (as encode and decode), if you want to do it with signatures. Is your question more about how to separate an encoder decoder model and put them separately?
```
model = Model()
tf.saved_model.save(
model, SAVED_MODEL_PATH,
signatures={
'encode': model.encode.get_concrete_function(),
'decode': model.decode.get_concrete_function()
})
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_PATH)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # TensorFlow ops.
]
tflite_model = converter.convert()
```
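As a short follow-on sketch: once the model is converted with named signatures as above, each entry point can be invoked separately from the TFLite interpreter (signature and argument names depend on how the wrapper model was defined, so treat these as placeholders):
```python
import tensorflow as tf

# Assumes `tflite_model` is the converted model from the snippet above.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
encode = interpreter.get_signature_runner("encode")
decode = interpreter.get_signature_runner("decode")
# encoder_out = encode(input_ids=..., attention_mask=...)
# logits = decode(decoder_input_ids=..., encoder_state=..., ...)
```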
> Sorry to tag @Rocketknight1 in this again - but I had another question on this topic: I was able to get a rudimentary yet successful quantized conversion of the TFMarianMTModel ( Sample script provided [here](https://colab.research.google.com/drive/1Vsinmju5FsbiEtfD9BqBHWeKxM3LES-U?usp=sharing)) for tflite. I was even able to ( not mentioned in this script ) generate nearly identical encoder outputs with the tflite interpreter on edge - but I am uncertain of how to invoke the decoder for the decoding process.
>
> I am not sure if there is any way I could even port this model as a whole ( since the decoder will have to be invoked repeatedly for the decoding process on edge ).
> More specifically - Can you please give any tips on how I can go about doing any of the following ?
>
> a. Is there any way I could use [Tensorflow Signatures](https://www.tensorflow.org/lite/guide/signatures#:~:text=Signatures%20can%20be%20specified%20when%20building%20a%20SavedModel,TensorFlow%20Lite%20model%20to%20support%20multiple%20entry%20points.) to separately invoke the interpreter for the encoder and decoder ?
>
> b. Is there any way I can port the TFMarianMTModels encoder and decoder separately ? ( so that I may invoke them as two different graphs )
>
> Thanks in advance for any help.
<|||||>Thanks for the help @merveenoyan; I was able to port the encoder/decoder separately; Using signatures just seemed like a more elegant solution; Thanks for your help! |
transformers | 13,543 | closed | [Feature Extractors] Return attention mask always in int32 | # What does this PR do?
This PR fixes:
```
tests/test_modeling_tf_hubert.py::TFHubertModelIntegrationTest::test_inference_ctc_robust_batched
tests/test_modeling_tf_wav2vec2.py::TFWav2Vec2ModelIntegrationTest::test_inference_ctc_robust_batched
```
For some specific use cases, the attention mask for feature extractors was returned as type `bool`, which broke two slow TF tests. This PR makes sure that it's always `int32` or `long`, just like the tokenizers do for text.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-13-2021 11:06:32 | 09-13-2021 11:06:32 | LGTM! I think lots of stuff breaks with boolean attention masks, so we definitely want to avoid sending that to the model. |
transformers | 13,542 | closed | Add checks to build cleaner model cards | # What does this PR do?
To avoid model cards being rejected by the metadata validation on the Hub, this PR cleans up a few things:
- removing None values from lists so that a [None] passed for a field is ignored (this fixes #13528 )
- ignoring results that don't have the three keys task, dataset and metrics since those three are mandatory.
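(Purely for illustration — not the actual patch — the list cleanup amounts to something like the following, with `clean_list` a hypothetical name:)
```python
def clean_list(values):
    # Drop None entries so a [None] field is treated as empty metadata.
    return [v for v in values if v is not None]
```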
| 09-13-2021 10:59:23 | 09-13-2021 10:59:23 | |
transformers | 13,541 | closed | Small changes in `perplexity.rst` to make the notebook executable on Google Colaboratory | # What does this PR do?
Small changes in `perplexity.rst` to make the notebook executable on Google Colaboratory.
Replaces this [PR](https://github.com/huggingface/notebooks/pull/85) in the `notebooks` repository.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. @sgugger, I would love to have your thoughts on this PR.
| 09-13-2021 10:38:26 | 09-13-2021 10:38:26 | I take the opportunity to point out that a change was already made in this file and not in the corresponding file in the `notebooks` repository: `from nlp import load_dataset` -> `from datasets import load_dataset`.
However, when we press the `open in Colab` button in the [documentation](https://huggingface.co/transformers/perplexity.html), we get the notebook stored in the `notebooks` repository. <|||||>> Thanks for fixing! Will upload the updated notebooks after this is merged.
With pleasure, it was really a small change! To understand the workflow: what we have to do is edit the files in the `docs/source` folder, automatically generate the notebooks from these files, and then manually upload the modified notebooks to the `notebooks` repository? :blush: <|||||>Yes, the exact workflow is to have an up-to-date clone of transformers in the same folder as the notebooks repo, then, in the notebooks repo, make sure we are up to date with master, run `make doc-notebooks`, commit and push the changes.
transformers | 13,540 | closed | fixing BC in `fill-mask` (wasn't tested in these test suites apparently). | # What does this PR do?
Should supersede https://github.com/huggingface/transformers/pull/13537
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-13-2021 10:31:09 | 09-13-2021 10:31:09 |
transformers | 13,539 | closed | [SequenceFeatureExtraction] Move padding logic from pure python to numpy | Currently, the padding for sequential feature extraction (https://github.com/huggingface/transformers/blob/9d60eebeb52ed3c266ab8e0cc6871ebeb08a5bc1/src/transformers/feature_extraction_sequence_utils.py#L38) is done in pure Python, even though the two feature extractors that rely on it both always return either a single numpy (or PT or TF) tensor or a list of numpy tensors - see:
- https://github.com/huggingface/transformers/blob/9d60eebeb52ed3c266ab8e0cc6871ebeb08a5bc1/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L205 for Wav2Vec2
- https://github.com/huggingface/transformers/blob/9d60eebeb52ed3c266ab8e0cc6871ebeb08a5bc1/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py#L209
More specifically:
1) `feature_extractor(raw_audio, return_tensors=None, padding=True)` => returns a list of numpy arrays
2) `feature_extractor(raw_audio, return_tensors="np", padding=True)` => returns a 2D numpy array (PT and TF return a 2D array in PT and TF correspondingly)
For some background, the text tokenizers all have the option to return "pure" Python lists, which was then implemented analogously for the speech feature extractors. However, it was decided that each individual sequence will always be a numpy array. This is because a very common use case is to load audio files via `torchaudio` or `librosa`, which return a list of numpy arrays, and then run the feature extractor on it. Running `feature_extractor(list_of_numpy_arrays, padding=True)` should then not change the inner numpy arrays to pure Python, as this would be confusing for the user and also be very slow.
Therefore I think the current design is good as is and that we can move the padding logic in https://github.com/huggingface/transformers/blob/9d60eebeb52ed3c266ab8e0cc6871ebeb08a5bc1/src/transformers/feature_extraction_sequence_utils.py#L38 from "pure" Python to a numpified approach. This would both improve speed and prevent instability issues like https://github.com/huggingface/transformers/pull/13538. This won't change anything in the user-facing API since at the moment the "inner" output is always a numpy vector anyway.
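To illustrate what I mean by a numpified approach, here is a rough sketch (not the final implementation; the padding value and attention-mask handling are just placeholders):
```
import numpy as np

def pad_numpy(sequences, padding_value=0.0):
    # Pad a list of 1D numpy arrays into a single 2D array plus an attention mask,
    # without ever converting the inner arrays to pure Python lists.
    max_len = max(len(seq) for seq in sequences)
    padded = np.full((len(sequences), max_len), padding_value, dtype=sequences[0].dtype)
    attention_mask = np.zeros((len(sequences), max_len), dtype=np.int32)
    for i, seq in enumerate(sequences):
        padded[i, : len(seq)] = seq
        attention_mask[i, : len(seq)] = 1
    return padded, attention_mask

padded, mask = pad_numpy([np.ones(3, dtype=np.float32), np.ones(5, dtype=np.float32)])
print(padded.shape, mask.shape)  # (2, 5) (2, 5)
```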
Do you agree on this @LysandreJik @sgugger @anton-l @patil-suraj ?
@anton-l - are you maybe interested in giving this PR a try? | 09-13-2021 10:02:47 | 09-13-2021 10:02:47 | I agree with this proposition.<|||||>Yeah, I'll take this one, thanks for the pointers @patrickvonplaten :+1: <|||||>Very much agree! +1<|||||>This sounds great! |
transformers | 13,538 | closed | [Speech2Text] Give feature extraction higher tolerance | # What does this PR do?
Fixes flaky test: tests/test_feature_extraction_speech_to_text.py::Speech2TextFeatureExtractionTest::test_cepstral_mean_and_variance_normalization_np
@patil-suraj @anton-l - I looked quite a bit into it and IMO it's not a bug in the computation, but rather comes from differences in casting "pure Python" values to numpy.
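As a toy illustration of where such differences can come from (this is not the actual test code; the array size and seed are arbitrary):
```
import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal(400).astype(np.float32)

# statistics accumulated via Python floats (float64), then applied to the float32 features
mean_py = sum(float(v) for v in feats) / len(feats)
std_py = (sum((float(v) - mean_py) ** 2 for v in feats) / len(feats)) ** 0.5
norm_py = ((feats - mean_py) / std_py).astype(np.float32)

# the same normalization computed directly on the float32 array
norm_np = (feats - feats.mean()) / feats.std()

# the two results are typically not bit-identical, hence the need for a
# slightly higher tolerance in the test
print(np.abs(norm_py - norm_np).max())
```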
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-13-2021 09:54:46 | 09-13-2021 09:54:46 | Merging this for now to keep the CI green - ran the test in loop 50 times locally without failure. |
transformers | 13,537 | closed | [Bart] fix slow bart mask-infilling tests | # What does this PR do?
Fixes the slow Bart GPU tests: https://github.com/huggingface/transformers/runs/3577103064?check_suite_focus=true#step:7:25556
```
tests/test_modeling_bart.py::BartModelIntegrationTests::test_base_mask_filling
tests/test_modeling_bart.py::BartModelIntegrationTests::test_large_mask_filling
```
cc @Narsil - I guess the output format has slightly changed?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-13-2021 09:43:29 | 09-13-2021 09:43:29 | Seems odd that the output type has changed; it shouldn't have changed. The PR refactor was supposed to come with zero regressions.
I had no idea that there were tests of pipelines on GPU outside the test_pipelines_* location.
<|||||>I think it's because we didn't have a test for a single-item list before, hence the backward incompatibility; we need to override the `__call__` method of the `fill-mask` pipeline instead, so that it keeps behaving oddly (as it did before).
Can we add other GPU tests here too? I think it's great to have performance checks.
Also the pipeline tests do not allow for GPU setting (anymore, and only some seem to have had the option), so that might explain it.<|||||>@Narsil - I don't follow here 100%. The tests were not just failing on GPU but also on CPU. What additional tests for pipelines should we add in your opinion exactly and where?
IMO, we should **not** continue the practice of adding pipeline tests in model testing files such as `test_modeling_bart.py` (it's a very old test if I remember correctly). We could add more slow tests to the individual task-specific tests, such as `test_pipelines_fill_mask.py`, but I think it'd be better to do this in a future PR.<|||||>Superseded by: https://github.com/huggingface/transformers/pull/13540
transformers | 13,536 | closed | [Speech2Text2] Skip newly added tokenizer test | The PR https://github.com/huggingface/transformers/commit/3dd538c4d37248961d4cf99f4c07e8a5fe54984c added a new tokenizer test that should have been skipped for the Speech2Text2 tokenizer. The Speech2Text2 tokenizers are yet to be added, as explained in https://github.com/huggingface/transformers/pull/13186 | 09-13-2021 09:18:31 | 09-13-2021 09:18:31 |
transformers | 13,535 | closed | Fix attention mask size checking for CLIP | # What does this PR do?
Fix a small error in attention mask size checking of CLIP.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
## Who can review?
@patil-suraj | 09-13-2021 07:55:57 | 09-13-2021 07:55:57 | |
transformers | 13,534 | closed | Package transformers.onnx should handle Tensorflow | # 🚀 Feature request
The `transformers.onnx` package should allow the conversion of TensorFlow models as well as PyTorch ones.
## Motivation
[This documentation](https://huggingface.co/transformers/serialization.html) features the following code snippet
```
python -m transformers.onnx --help
usage: Hugging Face ONNX Exporter tool [-h] -m MODEL -f {pytorch} [--features {default}] [--opset OPSET] [--atol ATOL] output
positional arguments:
output Path indicating where to store generated ONNX model.
optional arguments:
-h, --help show this help message and exit
-m MODEL, --model MODEL
Model's name of path on disk to load.
--features {default} Export the model with some additional features.
--opset OPSET ONNX opset version to export the model with (default 12).
--atol ATOL Absolute difference tolerance when validating the model.
```
which introduces the new package transformers.onnx. It also says: "This conversion is handled with the PyTorch version of models - it, therefore, requires PyTorch to be installed. If you would like to be able to convert from TensorFlow, please let us know by opening an issue." so here I am, opening an issue :)
I have been converting TensorFlow models to ONNX for a while, but I am not observing the time savings described by various references, which mainly use PyTorch (like this one: https://medium.com/microsoftazure/accelerate-your-nlp-pipelines-using-hugging-face-transformers-and-onnx-runtime-2443578f4333). Therefore, I am eager to try this new package for my conversions, but I currently cannot because we use TensorFlow.
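For context, this is roughly the conversion path I have been using so far with `tf2onnx` (outside of `transformers.onnx`); the model name, input signature and opset below are only illustrative and may need adjusting per architecture:
```
import tensorflow as tf
import tf2onnx
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("distilbert-base-uncased")  # illustrative checkpoint

input_signature = (
    tf.TensorSpec((None, None), tf.int32, name="input_ids"),
    tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
)

# trace the Keras model and export it to ONNX
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature=input_signature, opset=12, output_path="model.onnx"
)
```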
## Your contribution
I would be very keen to contribute to this development, but as it would be my first contribution to transformers, I would very much appreciate any guidance on how to do it and what to read!
Thank you :))
| 09-12-2021 19:32:39 | 09-12-2021 19:32:39 | Thank you for the feature request! We'll prevent the stale bot from closing that PR and see if there is a lot of interest in this addition.<|||||>Closing this since PR #13831 implemented the feature :) |
transformers | 13,533 | closed | Joint Sequence and Token Classification | # 🚀 Feature request
A common NLU task for voice assistants and chatbots is intent classification and slot filling. This can be modeled as a joint task via a model like [BERT](https://arxiv.org/abs/1902.10909). The implementation is essentially a token classification head along with a sequence classification head attached to a pre-trained encoder.
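To make the idea concrete, here is a minimal sketch of what such a joint head could look like (not a proposed final API; `num_intents` and `num_slots` are placeholders):
```
import torch.nn as nn
from transformers import AutoModel

class JointIntentSlotModel(nn.Module):
    # One shared encoder, a sequence-level head for intents and a token-level head for slots.
    def __init__(self, encoder_name="bert-base-uncased", num_intents=7, num_slots=72):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden_size = self.encoder.config.hidden_size
        self.intent_classifier = nn.Linear(hidden_size, num_intents)  # sequence classification
        self.slot_classifier = nn.Linear(hidden_size, num_slots)      # token classification

    def forward(self, input_ids, attention_mask=None):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = outputs.last_hidden_state            # (batch, seq_len, hidden)
        pooled_output = sequence_output[:, 0]                   # [CLS] representation
        intent_logits = self.intent_classifier(pooled_output)   # (batch, num_intents)
        slot_logits = self.slot_classifier(sequence_output)     # (batch, seq_len, num_slots)
        return intent_logits, slot_logits
```
During training, the two heads would typically be optimized jointly with a summed cross-entropy loss (sequence-level for intents, token-level for slots).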
## Motivation
I think this could be a good addition because it would allow the upload of joint slot/intent models and allow a common NLU task to be added to pipelines. Furthermore, this would enable audio encoder models like Wav2Vec2 to be used for End-to-End SLU tasks like [this](https://arxiv.org/abs/1904.03670).
## Your contribution
I would be open to contributing the implementation to BERT and uploading pre-trained NLU/SLU models on mainstream benchmarks like [snips](https://arxiv.org/abs/1805.10190v3) if other members of the community would benefit.
| 09-12-2021 19:17:23 | 09-12-2021 19:17:23 | Hello @will-rice, we're currently working on a feature to enable sharing and loading custom architectures like the one you mention, cc @sgugger.
We'll let you know as soon as it's ready for testing so that you may take a look and let us know if it's helpful. |