url: stringlengths (62-66)
repository_url: stringclasses (1 value)
labels_url: stringlengths (76-80)
comments_url: stringlengths (71-75)
events_url: stringlengths (69-73)
html_url: stringlengths (50-56)
id: int64 (377M-2.15B)
node_id: stringlengths (18-32)
number: int64 (1-29.2k)
title: stringlengths (1-487)
user: dict
labels: list
state: stringclasses (2 values)
locked: bool (2 classes)
assignee: dict
assignees: list
comments: sequence
created_at: int64 (1.54k-1.71k)
updated_at: int64 (1.54k-1.71k)
closed_at: int64 (1.54k-1.71k)
author_association: stringclasses (4 values)
active_lock_reason: stringclasses (2 values)
body: stringlengths (0-234k)
reactions: dict
timeline_url: stringlengths (71-75)
state_reason: stringclasses (3 values)
draft: bool (2 classes)
pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/9530
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9530/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9530/comments
https://api.github.com/repos/huggingface/transformers/issues/9530/events
https://github.com/huggingface/transformers/issues/9530
784,045,049
MDU6SXNzdWU3ODQwNDUwNDk=
9,530
Data format for TFTrainer for TFGpt2
{ "login": "kiyoungkim1", "id": 37245002, "node_id": "MDQ6VXNlcjM3MjQ1MDAy", "avatar_url": "https://avatars.githubusercontent.com/u/37245002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kiyoungkim1", "html_url": "https://github.com/kiyoungkim1", "followers_url": "https://api.github.com/users/kiyoungkim1/followers", "following_url": "https://api.github.com/users/kiyoungkim1/following{/other_user}", "gists_url": "https://api.github.com/users/kiyoungkim1/gists{/gist_id}", "starred_url": "https://api.github.com/users/kiyoungkim1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiyoungkim1/subscriptions", "organizations_url": "https://api.github.com/users/kiyoungkim1/orgs", "repos_url": "https://api.github.com/users/kiyoungkim1/repos", "events_url": "https://api.github.com/users/kiyoungkim1/events{/privacy}", "received_events_url": "https://api.github.com/users/kiyoungkim1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[ { "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false } ]
[ "Hello!\r\n\r\nAt first, can you share your Transformers and TensorFlow version please?", "@jplu \r\nI use Transformers 4.1.1 and Tensorflow 2.4 in GCP VM, but I can change versions. I will train this model with v3-8 and v3-32.\r\nMy code and command are shown below. It is largely based on run_clm.py and run_tf_text_classification.py.\r\nThanks.\r\n\r\n```\r\nimport logging\r\nimport os\r\nfrom dataclasses import dataclass, field\r\nfrom typing import Dict, Optional\r\n\r\nimport datasets\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\nfrom transformers import (\r\n AutoConfig,\r\n AutoTokenizer,\r\n TFAutoModel,\r\n\r\n GPT2Config,\r\n GPT2Tokenizer,\r\n BertTokenizer,\r\n TFGPT2LMHeadModel,\r\n HfArgumentParser,\r\n PreTrainedTokenizer,\r\n TFTrainer,\r\n TFTrainingArguments,\r\n)\r\n\r\ndef get_tfds(\r\n train_file: str,\r\n tokenizer: PreTrainedTokenizer,\r\n max_seq_length: Optional[int] = None,\r\n):\r\n files = {}\r\n\r\n if train_file is not None:\r\n files[datasets.Split.TRAIN] = [train_file]\r\n\r\n ds = datasets.load_dataset(\"csv\", data_files=files)\r\n features_name = 'content'\r\n transformed_ds = {}\r\n\r\n for k in files.keys():\r\n transformed_ds[k] = ds[k].map(\r\n lambda example: tokenizer.batch_encode_plus(\r\n example[features_name],\r\n truncation=True,\r\n max_length=max_seq_length\r\n ),\r\n batched=True,\r\n )\r\n\r\n def gen_train():\r\n for ex in transformed_ds[datasets.Split.TRAIN]:\r\n yield (\r\n {\r\n 'input_ids': ex[\"input_ids\"]\r\n },\r\n {\r\n 'labels': ex[\"input_ids\"]\r\n }\r\n )\r\n\r\n train_types = (\r\n {\r\n \"input_ids\": tf.int32\r\n },\r\n {\r\n \"labels\": tf.int32\r\n },\r\n )\r\n\r\n train_shapes = (\r\n {\r\n \"input_ids\": tf.TensorShape([None])\r\n },\r\n {\r\n \"labels\": tf.TensorShape([None])\r\n },\r\n )\r\n\r\n train_ds = tf.data.Dataset.from_generator(gen_train, train_types, train_shapes)\r\n\r\n\r\n if train_ds is not None:\r\n train_ds = train_ds.apply(tf.data.experimental.assert_cardinality(len(ds[datasets.Split.TRAIN])))\r\n\r\n return train_ds\r\n\r\n\r\n@dataclass\r\nclass DataTrainingArguments:\r\n \"\"\"\r\n Arguments pertaining to what data we are going to input our model for training and eval.\r\n Using `HfArgumentParser` we can turn this class\r\n into argparse arguments to be able to specify them on\r\n the command line.\r\n \"\"\"\r\n\r\n train_file: str = field(default=None, metadata={\"help\": \"The path of the training file\"})\r\n dev_file: Optional[str] = field(default=None, metadata={\"help\": \"The path of the development file\"})\r\n test_file: Optional[str] = field(default=None, metadata={\"help\": \"The path of the test file\"})\r\n max_seq_length: int = field(\r\n default=128,\r\n metadata={\r\n \"help\": \"The maximum total input sequence length after tokenization. 
Sequences longer \"\r\n \"than this will be truncated, sequences shorter will be padded.\"\r\n },\r\n )\r\n overwrite_cache: bool = field(\r\n default=False, metadata={\"help\": \"Overwrite the cached training and evaluation sets\"}\r\n )\r\n\r\n\r\n@dataclass\r\nclass ModelArguments:\r\n \"\"\"\r\n Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\r\n \"\"\"\r\n\r\n model_name_or_path: str = field(\r\n metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\r\n )\r\n config_name: Optional[str] = field(\r\n default=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\r\n )\r\n tokenizer_name: Optional[str] = field(\r\n default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\r\n )\r\n use_fast: bool = field(default=False, metadata={\"help\": \"Set this flag to use fast tokenization.\"})\r\n # If you want to tweak more attributes on your tokenizer, you should do it in a distinct script,\r\n # or just modify its tokenizer_config.json.\r\n cache_dir: Optional[str] = field(\r\n default=None,\r\n metadata={\"help\": \"Where do you want to store the pretrained models downloaded from huggingface.co\"},\r\n )\r\n\r\n\r\n\r\n# See all possible arguments in src/transformers/training_args.py\r\n# or by passing the --help flag to this script.\r\n# We now keep distinct sets of args, for a cleaner separation of concerns.\r\nparser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments))\r\nmodel_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n\r\nif (\r\n os.path.exists(training_args.output_dir)\r\n and os.listdir(training_args.output_dir)\r\n and training_args.do_train\r\n and not training_args.overwrite_output_dir\r\n):\r\n raise ValueError(\r\n f\"Output directory ({training_args.output_dir}) already exists and is not empty. 
Use --overwrite_output_dir to overcome.\"\r\n )\r\n\r\n# tokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\r\n cache_dir=model_args.cache_dir,\r\n)\r\n#tokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n\r\n# data\r\ntrain_dataset = get_tfds(\r\n train_file=data_args.train_file,\r\n tokenizer=tokenizer,\r\n max_seq_length=data_args.max_seq_length,\r\n)\r\n\r\n# config\r\nconfig_kwargs = {\r\n \"cache_dir\": model_args.cache_dir,\r\n #\"use_auth_token\": True if model_args.use_auth_token else None,\r\n}\r\n\r\nconfig = GPT2Config.from_pretrained(model_args.model_name_or_path, **config_kwargs)\r\n\r\n# model\r\nwith training_args.strategy.scope():\r\n model = TFGPT2LMHeadModel.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_pt=bool(\".bin\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n )\r\n# model.resize_token_embeddings(len(tokenizer))\r\n\r\n# Initialize our Trainer\r\ntrainer = TFTrainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset\r\n)\r\n\r\n# Training\r\nif training_args.do_train:\r\n model_path = (\r\n model_args.model_name_or_path\r\n if (model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path))\r\n else None\r\n )\r\n train_result = trainer.train()\r\n trainer.save_model(model_path) # Saves the tokenizer too for easy upload\r\n #tokenizer.save_pretrained(training_args.output_dir)\r\n\r\n output_train_file = os.path.join(training_args.output_dir, \"train_results.txt\")\r\n if trainer.is_world_process_zero():\r\n with open(output_train_file, \"w\") as writer:\r\n logger.info(\"***** Train results *****\")\r\n for key, value in sorted(train_result.metrics.items()):\r\n logger.info(f\" {key} = {value}\")\r\n writer.write(f\"{key} = {value}\\n\")\r\n\r\n # Need to save the state, since Trainer.save_model saves only the tokenizer with the model\r\n trainer.state.save_to_json(os.path.join(training_args.output_dir, \"trainer_state.json\"))\r\n```\r\n\r\n```\r\npython3 run_tftrainer.py \\\r\n --train_file datasets.csv \\\r\n --model_name_or_path gpt2 \\\r\n --do_train \\\r\n --output_dir model \\\r\n --num_train_epochs 4 \\\r\n --per_device_train_batch_size 4 \\\r\n --logging_steps 10 \\\r\n --save_steps 10 \\\r\n --overwrite_output_dir \\\r\n --max_seq_length 128\r\n```\r\n\r\n", "Ok, thanks a lot for sharing this!\r\n\r\nIf you are using the 4.1.1 release of Transformers, the `TFGPT2LMHeadModel` has a `labels` argument so the problem might come from elsewhere. The other thing to know is that it is currently not possible to train an LM from scratch with TF until the next release (coming very soon), only fine tuning is possible for now.\r\n\r\nWhat is the error you get exactly?", "I see.\r\nError message is shown below. I have tuned many things, but the loss shows nan at best.\r\nWill it take more than a week for the next release? 
Then I will use pytorch/xla.\r\n\r\n\r\n```\r\nAll the layers of TFGPT2LMHeadModel were initialized from the model checkpoint at gpt2.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.\r\nTraceback (most recent call last):\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/eager/context.py\", line 2102, in execution_mode\r\n yield\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py\", line 758, in _next_internal\r\n output_shapes=self._flat_output_shapes)\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_dataset_ops.py\", line 2610, in iterator_get_next\r\n _ops.raise_from_not_ok_status(e, name)\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py\", line 6843, in raise_from_not_ok_status\r\n six.raise_from(core._status_to_exception(e.code, message), None)\r\n File \"<string>\", line 3, in raise_from\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [65] and element 1 had shape [31].\r\n [[{{node MultiDeviceIteratorGetNextFromShard}}]]\r\n [[RemoteCall]] [Op:IteratorGetNext]\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"run_tftrainer.py\", line 198, in <module>\r\n train_result = trainer.train()\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/transformers/trainer_tf.py\", line 548, in train\r\n for step, batch in enumerate(train_ds):\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py\", line 649, in __next__\r\n return self.get_next()\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py\", line 694, in get_next\r\n self._iterators[i].get_next_as_list_static_shapes(new_name))\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/distribute/input_lib.py\", line 1474, in get_next_as_list_static_shapes\r\n return self._iterator.get_next()\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py\", line 581, in get_next\r\n result.append(self._device_iterators[i].get_next())\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py\", line 825, in get_next\r\n return self._next_internal()\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py\", line 764, in _next_internal\r\n return structure.from_compatible_tensor_list(self._element_spec, ret)\r\n File \"/usr/lib/python3.6/contextlib.py\", line 99, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/eager/context.py\", line 2105, in execution_mode\r\n executor_new.wait()\r\n File \"/home/kykim/.local/lib/python3.6/site-packages/tensorflow/python/eager/executor.py\", line 67, in wait\r\n pywrap_tfe.TFE_ExecutorWaitForAllPendingNodes(self._handle)\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot batch tensors with different shapes in component 0. 
First element had shape [65] and element 1 had shape [31].\r\n [[{{node MultiDeviceIteratorGetNextFromShard}}]]\r\n [[RemoteCall]]\r\n2021-01-12 19:57:25.672632: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.\r\n [[{{node PyFunc}}]]\r\n```", "I know that `tf.data.Dataset.from_generator` has some issues on TPUs, can you rewrite your data processing function to use `tf.data.Dataset.from_tensor_slices` instead?\r\n\r\nAlso, to be sure that your data are properly formatted you can add an assert to checks this. In order to know if the problems comes from there or not." ]
1,610
1,610
1,610
CONTRIBUTOR
null
```train_dataset``` in TFtrainer needs ```(features, labels)```, but TFGpt2 does not need labels (document in TFGPT2LMHeadModel). Do I know the data format for TFTrainer for TFGpt2? I have tried this code, but does not work. Thanks. ``` def gen_train(): for ex in transformed_ds[datasets.Split.TRAIN]: yield ( { 'input_ids': ex["input_ids"] }, { 'labels': ex["input_ids"] } ) train_types = ( { "input_ids": tf.int32 }, { "labels": tf.int32 }, ) train_shapes = ( { "input_ids": tf.TensorShape([None]) }, { "labels": tf.TensorShape([None]) }, ) train_ds = tf.data.Dataset.from_generator(gen_train, train_types, train_shapes) if train_ds is not None: train_ds = train_ds.apply(tf.data.experimental.assert_cardinality(len(ds[datasets.Split.TRAIN]))) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9530/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9528
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9528/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9528/comments
https://api.github.com/repos/huggingface/transformers/issues/9528/events
https://github.com/huggingface/transformers/issues/9528
783,846,382
MDU6SXNzdWU3ODM4NDYzODI=
9,528
Print All Tokens Over a Certain Probability Threshold: T5
{ "login": "BigSalmon2", "id": 61605789, "node_id": "MDQ6VXNlcjYxNjA1Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BigSalmon2", "html_url": "https://github.com/BigSalmon2", "followers_url": "https://api.github.com/users/BigSalmon2/followers", "following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}", "gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}", "starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions", "organizations_url": "https://api.github.com/users/BigSalmon2/orgs", "repos_url": "https://api.github.com/users/BigSalmon2/repos", "events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}", "received_events_url": "https://api.github.com/users/BigSalmon2/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nHave you checked the [T5 docs](https://huggingface.co/transformers/model_doc/t5.html) regarding the `decoder_inputs`? Are they unclear?\r\n\r\nThanks!", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,610
1,614
1,614
NONE
null
`This works with GPT-2, but not with T5. Is it possible to adapt this to make T5 work? This works with GPT-2, but not with T5. Is it possible to adapt this to make T5 work?` ``` import torch import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("t5-base") model = AutoModelWithLMHead.from_pretrained("t5-base") input_txt = "Hello, my name is Sylvain." inputs = tokenizer(input_txt, return_tensors='pt') outputs = model(**inputs) predictions = F.softmax(outputs[0], dim=-1) thresh = 1e-2 vocab_size = predictions.shape[-1] idxs = torch.arange(0, vocab_size)[predictions[0][-1] >= thresh] print(tokenizer.convert_ids_to_tokens(idxs)) ``` `ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9528/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9527
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9527/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9527/comments
https://api.github.com/repos/huggingface/transformers/issues/9527/events
https://github.com/huggingface/transformers/issues/9527
783,821,020
MDU6SXNzdWU3ODM4MjEwMjA=
9,527
[BlenderbotSmallTokenizer] Cannot download tokenizer
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Is `blenderbot-small` a valid model_type?\r\n\r\nOr maybe this file https://huggingface.co/facebook/blenderbot_small-90M/blob/main/tokenizer_config.json is an issue?", "yeah, `blenderbot-small` is valid. One can download both the model and the config correctly:\r\n\r\n```python\r\nfrom transformers import BlenderbotSmallModel\r\n\r\nmodel = BlenderbotSmallModel.from_pretrained(\"facebook/blenderbot_small-90M\")\r\n```\r\n\r\nI think it has something to do with the tokenizers `vocab.json` file. But it's 1-to-1 the same file as in `\"facebook/blenderbot-90M\"` which can correctly be loaded...I'll have to check in more detail in the next days. There is probably a problem with the BlenderbotSmallTokenizer" ]
1,610
1,610
1,610
MEMBER
null
When running: ```python from transformers import BlenderbotSmallTokenizer tok = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M") ``` the command fails with the error ```~/python_bin/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs) 1894 # Instantiate tokenizer. 1895 try: -> 1896 tokenizer = cls(*init_inputs, **init_kwargs) 1897 except OSError: 1898 raise OSError( ~/python_bin/transformers/models/blenderbot_small/tokenization_blenderbot_small.py in __init__(self, vocab_file, merges_file, bos_token, eos_token, unk_token, pad_token, **kwargs) 107 108 with open(vocab_file, encoding="utf-8") as vocab_handle: --> 109 self.encoder = json.load(vocab_handle) 110 self.decoder = {v: k for k, v in self.encoder.items()} 111 with open(merges_file, encoding="utf-8") as merges_handle: /usr/lib/python3.7/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 294 cls=cls, object_hook=object_hook, 295 parse_float=parse_float, parse_int=parse_int, --> 296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) 297 298 /usr/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 346 parse_int is None and parse_float is None and 347 parse_constant is None and object_pairs_hook is None and not kw): --> 348 return _default_decoder.decode(s) 349 if cls is None: 350 cls = JSONDecoder /usr/lib/python3.7/json/decoder.py in decode(self, s, _w) 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() 339 if end != len(s): /usr/lib/python3.7/json/decoder.py in raw_decode(self, s, idx) 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: --> 355 raise JSONDecodeError("Expecting value", s, err.value) from None 356 return obj, end JSONDecodeError: Expecting value: line 1 column 1 (char 0) ``` This is strange since `"facebook/blenderbot_small-90M"` is just a copy of `"facebook/blenderbot-90M"` which works: ```python from transformers import BlenderbotSmallTokenizer tok = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot-90M") ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9527/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9526
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9526/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9526/comments
https://api.github.com/repos/huggingface/transformers/issues/9526/events
https://github.com/huggingface/transformers/issues/9526
783,712,533
MDU6SXNzdWU3ODM3MTI1MzM=
9,526
Siamese Multi-depth Transformer-based Hierarchical Encoder
{ "login": "lalitpagaria", "id": 19303690, "node_id": "MDQ6VXNlcjE5MzAzNjkw", "avatar_url": "https://avatars.githubusercontent.com/u/19303690?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lalitpagaria", "html_url": "https://github.com/lalitpagaria", "followers_url": "https://api.github.com/users/lalitpagaria/followers", "following_url": "https://api.github.com/users/lalitpagaria/following{/other_user}", "gists_url": "https://api.github.com/users/lalitpagaria/gists{/gist_id}", "starred_url": "https://api.github.com/users/lalitpagaria/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lalitpagaria/subscriptions", "organizations_url": "https://api.github.com/users/lalitpagaria/orgs", "repos_url": "https://api.github.com/users/lalitpagaria/repos", "events_url": "https://api.github.com/users/lalitpagaria/events{/privacy}", "received_events_url": "https://api.github.com/users/lalitpagaria/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Linking Haystack issue https://github.com/deepset-ai/haystack/issues/719", "Frequent user of hugging face here, I'm a fan of this new publication and would love to see it implemented. Commenting here for the GitHub algorithm to ++", "Hi all, rather than waiting for the implementation in huggingface. Is there a simple way to utilize the pretrained model from the smith repo on our own dataset (to generate document embedding)?" ]
1,610
1,623
null
CONTRIBUTOR
null
# 🌟 New model addition ## Model description Recently Google is published paper titled ["Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching"](https://arxiv.org/abs/2004.12297). And according to paper for long-form document matching SMITH model outperforms the previous state-of-the-art models including hierarchical attention, multi-depth attention-based hierarchical recurrent neural network, and BERT. I feel it is will add value to already awesome transformers models collection :slightly_smiling_face: <!-- Important information --> ## Open source status * [X] the model implementation is available: https://github.com/google-research/google-research/tree/master/smith * [X] the model weights are available: [SMITH-WP+SP model checkpoint](http://storage.googleapis.com/gresearch/smith_gwikimatch/smith_wsp_pretrain_ckpt_opensource.zip) and [GWikiMatch data](http://storage.googleapis.com/gresearch/smith_gwikimatch/gwikimatch_open_source.zip) * [X] who are the authors: https://github.com/yangliuy, https://github.com/eladeban
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9526/reactions", "total_count": 25, "+1": 19, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 6 }
https://api.github.com/repos/huggingface/transformers/issues/9526/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/9525
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9525/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9525/comments
https://api.github.com/repos/huggingface/transformers/issues/9525/events
https://github.com/huggingface/transformers/issues/9525
783,707,880
MDU6SXNzdWU3ODM3MDc4ODA=
9,525
mBART is not saving (learned) position embeddings
{ "login": "juand-r", "id": 14251866, "node_id": "MDQ6VXNlcjE0MjUxODY2", "avatar_url": "https://avatars.githubusercontent.com/u/14251866?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juand-r", "html_url": "https://github.com/juand-r", "followers_url": "https://api.github.com/users/juand-r/followers", "following_url": "https://api.github.com/users/juand-r/following{/other_user}", "gists_url": "https://api.github.com/users/juand-r/gists{/gist_id}", "starred_url": "https://api.github.com/users/juand-r/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juand-r/subscriptions", "organizations_url": "https://api.github.com/users/juand-r/orgs", "repos_url": "https://api.github.com/users/juand-r/repos", "events_url": "https://api.github.com/users/juand-r/events{/privacy}", "received_events_url": "https://api.github.com/users/juand-r/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @juand-r, \r\n\r\nThanks for the issue! I think this problem should be solved by now. We have done some major refactoring for MBart and removed the `_keys_to_ignore_on_save` for MBart. Can you check whether the error persists on current master? We will do a release tomorrow probably so that the fix should be included in the next pip version :-) ", "Thanks, @patrickvonplaten !\r\n\r\nI just checked the error is gone when using version 4.2.1.", "Hey @juand-r ,\r\n\r\nI am also trying to fine tune mBART for some non English corpus. Is there any sample script that I can follow for this task? ", "Hi @ozcangundes,\r\n\r\nThis could be helpful:\r\nhttps://github.com/GEM-benchmark/GEM-baseline-models/blob/main/examples/mbart_large_mlsum_ru.ipynb\r\n\r\n> Hey @juand-r ,\r\n> \r\n> I am also trying to fine tune mBART for some non English corpus. Is there any sample script that I can follow for this task?\r\n\r\n" ]
1,610
1,614
1,610
NONE
null
## Environment info - `transformers` version: 4.1.1 - Platform: Linux - Python version: 3.8.2 - PyTorch version (GPU?): 1.4.0 (with gpu) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten ## Information I am fine-tuning mBART-large on MLSUM (Spanish, and also Russian). However, I noticed two things: - The saved checkpoints are not saving the position embeddings (`BartLearnedPositionalEmbedding`, for both encoder and decoder). - Due to this, ROUGE scores on the validation set when evaluating on loaded checkpoints are lower than those which were shown during training. I noticed that the mBART config includes: ``` keys_to_never_save = [ "model.encoder.embed_positions.weight", "model.decoder.embed_positions.weight", ] ``` and likewise for `keys_to_ignore_on_load_missing`. I suppose this was done in response to issue [#7296](https://github.com/huggingface/transformers/issues/7296). This would be fine if the mBART position embeddings were static, but they seem to be learned. The [mbart configuration](https://github.com/huggingface/transformers/blob/master/src/transformers/models/mbart/configuration_mbart.py) shows `static_position_embeddings = False`. I can load and save the mBART model correctly if I set the following before fine-tuning: ``` mbart_model._keys_to_ignore_on_load_missing = None mbart_model._keys_to_ignore_on_save = None ``` The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM mbart_tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25") mbart_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-cc25") ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Abstractive summarization. ## To reproduce Steps to reproduce the behavior: 1. Load the model: `mbart_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-cc25")` 2. Fine-tune the mBART model and use `load_best_model_at_end=True`. 3. Save and load the fine-tuned model, and verify that they are different (and texts generated from them are different). 4. Setting `mbart_model._keys_to_ignore_on_load_missing = None` and `mbart_model._keys_to_ignore_on_save = None` fixes the problem (the full model is saved, and the checkpoints are correct). ## Expected behavior The model's position embeddings and generated outputs should be exactly the same after saving it and loading from disk.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9525/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9524
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9524/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9524/comments
https://api.github.com/repos/huggingface/transformers/issues/9524/events
https://github.com/huggingface/transformers/pull/9524
783,690,073
MDExOlB1bGxSZXF1ZXN0NTUzMDA3ODEw
9,524
Refactor `prepare_seq2seq_batch`
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you run the slow tests of the models concerned?", "For reference, some discussions on why the method was added in the first place:\r\nhttps://github.com/huggingface/transformers/issues/6080\r\nhttps://github.com/huggingface/transformers/pull/6103", "In general, I agree very much with your approach here and I like the idea of a context manager. From a user perspective for Seq2Seq models these are the common practices IMO:\r\n\r\n**Inference in 99% of the time**: You use generate() in 99% of the time so you tokenize only your input_ids exactly the same way you'd do it for other models (like gpt2)\r\n \r\n**Inference in 1% of the time**: In case you just want to do a single forward pass, you will have to input `input_ids` and `decoder_input_ids` -> this is usually only for special cases so one can reasonably expect the user to know how the model works. However in this case we either do need the context-manager or a `prepare_seq2seq_batch` method since the start token for the decoder is very much different from the one of the encoder. This is actually such as special case that we don't even need any magic functions for that, but just assume that the user manually prepends the `decoder_start_token_id` to `decoder_input_ids`.\r\n\r\n**training**: All seq2seq models usually only require the `labels` and `input_ids` and then the `decoder_input_ids` are automatically generated, with a method that just shifts the `labels` one to the right and adds the `decoder_start_token_id`. So far there is not a single seq2seq model that does not have this mechanism and thus all seq2seq models can be trained by only passing `input_ids` and `labels`. So I think we can assume that all seq2seq models only require `input_ids` and `labels`. => this makes them then also very similar to how BERT-like models are trained since they also just need `input_ids` and `labels`. Here the `prepare_seq2seq_batch` method is useful because it tokenizes both inputs at once and has some additional features like `src_lang` and `tgt_lang` (useful for MBart and Marian only though) and `target_max_length`, etc....but as said in the issues referenced above and mentioned in this PR as well, I do think that those are some \"magic\" functionalities that should not have their origin in `src/transformers` but better in `examples` -> so I think we agree here @sgugger .", "So only thing, I'm a bit worried about is that some users got very accustomed to the `prepare_seq2seq_batch` method so that they won't be too happy about removing it (especially since it's also doing all the `max_length` and `max_target_length` automatically.\r\n\r\nBut very much in favor of this change", "Failure is independent, due to the new tokenziers release, so merging." ]
1,610
1,610
1,610
COLLABORATOR
null
# What does this PR do? This PR refactors the logic of `prepare_seq2seq_batch` which is roughly: 1. tokenize inputs 2. make some changes to prepare the tokenizer for target encoding 3. tokenize targets 4. revert the changes made in 2 for the next tokenization by introducing a new context manage that is in charge of 2 and 4 (the method is then the same for all tokenizers, with some small exceptions). The end plan is to use this new context manage in the examples and deprecate `prepare_seq2seq_batch` before removing it in a next major version: it's as if we had a `prepare_text_classification_batch`, `prepare_token_classification_batch`... and so on for each task and also doesn't allow for the preprocessing to be done once and for all (since it's used to tokenize text on the fly right now). This for future development, the PR in itself is 100%-backward compatible.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9524/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9524", "html_url": "https://github.com/huggingface/transformers/pull/9524", "diff_url": "https://github.com/huggingface/transformers/pull/9524.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9524.patch", "merged_at": 1610493579000 }
https://api.github.com/repos/huggingface/transformers/issues/9523
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9523/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9523/comments
https://api.github.com/repos/huggingface/transformers/issues/9523/events
https://github.com/huggingface/transformers/issues/9523
783,653,658
MDU6SXNzdWU3ODM2NTM2NTg=
9,523
Documentation's example script linked does no exist anymore
{ "login": "Skylixia", "id": 12053610, "node_id": "MDQ6VXNlcjEyMDUzNjEw", "avatar_url": "https://avatars.githubusercontent.com/u/12053610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Skylixia", "html_url": "https://github.com/Skylixia", "followers_url": "https://api.github.com/users/Skylixia/followers", "following_url": "https://api.github.com/users/Skylixia/following{/other_user}", "gists_url": "https://api.github.com/users/Skylixia/gists{/gist_id}", "starred_url": "https://api.github.com/users/Skylixia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Skylixia/subscriptions", "organizations_url": "https://api.github.com/users/Skylixia/orgs", "repos_url": "https://api.github.com/users/Skylixia/repos", "events_url": "https://api.github.com/users/Skylixia/events{/privacy}", "received_events_url": "https://api.github.com/users/Skylixia/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello @Skylixia,\r\n\r\nI believe the summarisation examples have been migrated to the `seq2seq` folder here: https://github.com/huggingface/transformers/tree/master/examples/seq2seq\r\n\r\nIn the README, you can find instructions on how to fine-tune a model for summarisation: https://github.com/huggingface/transformers/tree/master/examples/seq2seq#fine-tuning-using-seq2seqtrainer\r\n\r\nHTH!", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,610
1,614
1,614
NONE
null
Hello, I'm looking at the [documentation ](https://huggingface.co/transformers/v2.2.0/examples.html#abstractive-summarization) provided examples to be able to fine-tune a summarization task. It refers to the script run_summarization_finetuning.py but the link provided: https://github.com/huggingface/transformers/blob/master/examples/run_summarization_finetuning.py returns a 404 error. Did the script migrate to another link ? Where can I find an example for the fine-tuning of a summarization task now ? Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9523/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9522
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9522/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9522/comments
https://api.github.com/repos/huggingface/transformers/issues/9522/events
https://github.com/huggingface/transformers/pull/9522
783,645,441
MDExOlB1bGxSZXF1ZXN0NTUyOTY5OTAy
9,522
[make docs] parallel build
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
CONTRIBUTOR
null
This PR enables multi-worker doc building. After experimenting with different number of workers https://github.com/huggingface/transformers/issues/9496#issuecomment-758145868 4-5 workers seems to be the most optimal - let's go with 4 as surely we wouldn't find a cpu with less cores these days. Fixes part of https://github.com/huggingface/transformers/issues/9496 @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9522/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9522", "html_url": "https://github.com/huggingface/transformers/pull/9522", "diff_url": "https://github.com/huggingface/transformers/pull/9522.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9522.patch", "merged_at": 1610398809000 }
https://api.github.com/repos/huggingface/transformers/issues/9521
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9521/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9521/comments
https://api.github.com/repos/huggingface/transformers/issues/9521/events
https://github.com/huggingface/transformers/issues/9521
783,552,482
MDU6SXNzdWU3ODM1NTI0ODI=
9,521
Converting T5 (text to text transfer transformer model) checkpoints to pytorch
{ "login": "mmcs-work", "id": 28564860, "node_id": "MDQ6VXNlcjI4NTY0ODYw", "avatar_url": "https://avatars.githubusercontent.com/u/28564860?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmcs-work", "html_url": "https://github.com/mmcs-work", "followers_url": "https://api.github.com/users/mmcs-work/followers", "following_url": "https://api.github.com/users/mmcs-work/following{/other_user}", "gists_url": "https://api.github.com/users/mmcs-work/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmcs-work/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmcs-work/subscriptions", "organizations_url": "https://api.github.com/users/mmcs-work/orgs", "repos_url": "https://api.github.com/users/mmcs-work/repos", "events_url": "https://api.github.com/users/mmcs-work/events{/privacy}", "received_events_url": "https://api.github.com/users/mmcs-work/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
NONE
null
Earlier the TensorFlow models were converted using `convert_t5_original_tf_checkpoint_to_pytorch` script file. But now this file is not available anymore. Currently, (transformers 4.1.1) what is the way of converting the t5 model checkpoints to Pytorch?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9521/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9520
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9520/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9520/comments
https://api.github.com/repos/huggingface/transformers/issues/9520/events
https://github.com/huggingface/transformers/issues/9520
783,523,610
MDU6SXNzdWU3ODM1MjM2MTA=
9,520
T2TDataCollator 'target_ids' key error
{ "login": "shubhambharadwaj", "id": 18680326, "node_id": "MDQ6VXNlcjE4NjgwMzI2", "avatar_url": "https://avatars.githubusercontent.com/u/18680326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shubhambharadwaj", "html_url": "https://github.com/shubhambharadwaj", "followers_url": "https://api.github.com/users/shubhambharadwaj/followers", "following_url": "https://api.github.com/users/shubhambharadwaj/following{/other_user}", "gists_url": "https://api.github.com/users/shubhambharadwaj/gists{/gist_id}", "starred_url": "https://api.github.com/users/shubhambharadwaj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shubhambharadwaj/subscriptions", "organizations_url": "https://api.github.com/users/shubhambharadwaj/orgs", "repos_url": "https://api.github.com/users/shubhambharadwaj/repos", "events_url": "https://api.github.com/users/shubhambharadwaj/events{/privacy}", "received_events_url": "https://api.github.com/users/shubhambharadwaj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe @sgugger has an idea.", "Hi @maxie320,\r\n\r\nThe `Trainer` now removes unused keys from the dataset if the dataset is an instance of `datasets.Dataset`. By unused, it means all the keys which are not in the model's forward method's argument list. And since `target_ids` is not a argument expected by the forward it's getting removed by the `Trainer` and hence the `KeyError`\r\n\r\nYou can rename the `target_ids` key by `labels` and also change the collator accordingly which should fix this issue", "@patil-suraj It's now showing a key error on `target_attention_mask`. I'm guessing this name has been changed as well?", "Okay, figured it out, used the same name `decoder_attention_mask` in `T2TDataCollator` as given in `forward()` method argument list, thanks for the assist @patil-suraj", "Also note that you can set `remove_unused_columns=False` in your `TrainingArguments` to disable the behavior where Trainer drops the columns not in the model signature.", "Sure, thank you! @sgugger", "Closing this issue since it seems solved, don't hesitate to reopen if you have more problems!", "Hello everyone,\r\nI am trying to run the same notebook given in https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb and I have a similar problem mentioned here. I applied the possible changes mentioned here, but it could not solve my problem.\r\n\r\nThere was a problem in nightly version with import torch as mentioned https://stackoverflow.com/questions/67257008/oserror-libmkl-intel-lp64-so-1-cannot-open-shared-object-file-no-such-file-or/67479054#67479054. After I add the modifications, it throws the error below:\r\n\r\n/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n1050 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1051 return forward_call(*input, **kwargs)\r\n1052 # Do not call functions when jit is used\r\n1053 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nTypeError: forward() got an unexpected keyword argument 'lm_labels'\r\n\r\nAny ideas?\r\nThanks", "T5 expects `labels` now, not `lm_labels`. You should replace that in the return statement of your data collator\r\n```\r\nclass T2TDataCollator(DataCollator):\r\n def collate_batch(self, batch: List) -> Dict[str, torch.Tensor]:\r\n \"\"\"\r\n Take a list of samples from a Dataset and collate them into a batch.\r\n Returns:\r\n A dictionary of tensors\r\n \"\"\"\r\n input_ids = torch.stack([example['input_ids'] for example in batch])\r\n lm_labels = torch.stack([example['target_ids'] for example in batch])\r\n lm_labels[lm_labels[:, :] == 0] = -100\r\n attention_mask = torch.stack([example['attention_mask'] for example in batch])\r\n decoder_attention_mask = torch.stack([example['target_attention_mask'] for example in batch])\r\n \r\n\r\n return {\r\n 'input_ids': input_ids, \r\n 'attention_mask': attention_mask,\r\n 'lm_labels': lm_labels, \r\n 'decoder_attention_mask': decoder_attention_mask\r\n }\r\n```\r\n\r\nAlso that data collator should have a `__call__` method, not a `collate_batch`.", "Thank you for your quick reply @sgugger, now it works!" ]
1,610
1,620
1,610
NONE
null
Hi all, I'm facing issues with this part of the code (post making changes as suggested [here](https://github.com/huggingface/transformers/issues/5049)) in T5-Base for QA. ``` import dataclasses import logging import os import sys from dataclasses import dataclass, field from typing import Dict, List, Optional import numpy as np import torch from transformers import T5ForConditionalGeneration, T5Tokenizer, EvalPrediction from transformers import ( HfArgumentParser, DataCollator, Trainer, TrainingArguments, set_seed, ) logger = logging.getLogger(__name__) # prepares lm_labels from target_ids, returns examples with keys as expected by the forward method # this is necessacry because the trainer directly passes this dict as arguments to the model # so make sure the keys match the parameter names of the forward method @dataclass class T2TDataCollator: #(DataCollator) def __call__(self, batch: List) -> Dict[str, torch.Tensor]: #collate_batch """ Take a list of samples from a Dataset and collate them into a batch. Returns: A dictionary of tensors """ input_ids = torch.stack([example['input_ids'] for example in batch]) lm_labels = torch.stack([example['target_ids'] for example in batch]) lm_labels[lm_labels[:, :] == 0] = -100 attention_mask = torch.stack([example['attention_mask'] for example in batch]) decoder_attention_mask = torch.stack([example['target_attention_mask'] for example in batch]) return { 'input_ids': input_ids, 'attention_mask': attention_mask, 'lm_labels': lm_labels, 'decoder_attention_mask': decoder_attention_mask } ``` Which is fetching this error:- ``` Exception in thread Thread-12: Traceback (most recent call last): File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner self.run() File "/usr/lib/python3.6/threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 133, in _loader_worker _, data = next(data_iter) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__ data = self._next_data() File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 557, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "<ipython-input-7-7b8c1b4d4c9a>", line 36, in __call__ lm_labels = torch.stack([example['target_ids'] for example in batch]) File "<ipython-input-7-7b8c1b4d4c9a>", line 36, in <listcomp> lm_labels = torch.stack([example['target_ids'] for example in batch]) KeyError: 'target_ids' ``` My train and validation dataset has `'target_ids'` field (read from `datasets.Dataset.from_pandas()` method and mapped the `add_eos_to_examples` and `convert_to_features` successfully): `train_dataset['target_ids']` ``` tensor([[ 1027, 9533, 3440, ..., 0, 0, 0], [ 7327, 1387, 11597, ..., 0, 0, 0], [ 272, 5, 7130, ..., 0, 0, 0], ..., [15810, 1, 0, ..., 0, 0, 0], [ 7107, 1, 0, ..., 0, 0, 0], [ 454, 5, 134, ..., 0, 0, 0]]) ``` `valid_dataset['target_ids']` ``` tensor([[15810, 1, 0, ..., 0, 0, 0], [ 4190, 4329, 1, ..., 0, 0, 0], [ 4329, 11, 7107, ..., 0, 0, 0], ..., [ 3, 4, 1, ..., 0, 0, 0], [ 3, 4, 1, ..., 0, 0, 0], [ 8642, 4425, 9, ..., 0, 0, 0]]) ``` I am unable to fetch this field using class `T2TDataCollator:`. Please assist, thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9520/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9519
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9519/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9519/comments
https://api.github.com/repos/huggingface/transformers/issues/9519/events
https://github.com/huggingface/transformers/pull/9519
783,494,346
MDExOlB1bGxSZXF1ZXN0NTUyODQ1MDky
9,519
Update 'Develop on Windows' guidelines
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Do you have any idea why GitHub is not showing the diff properly?", "Looks like I'm having an issue with CRLFs :( I think I replaced all CRLFs by LFs\r\nI'm currently investigating this", "@sgugger problem solved 👌" ]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? Update the `Develop on Windows` guidelines in `CONTRIBUTING.md` to add: - Instructions to setup git to handle CRLF line endings - Instructions to add MSYS executables in your PATH to run `make` from another terminal Fixes #9438 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sgugger @jplu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9519/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9519", "html_url": "https://github.com/huggingface/transformers/pull/9519", "diff_url": "https://github.com/huggingface/transformers/pull/9519.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9519.patch", "merged_at": 1610442917000 }
https://api.github.com/repos/huggingface/transformers/issues/9518
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9518/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9518/comments
https://api.github.com/repos/huggingface/transformers/issues/9518/events
https://github.com/huggingface/transformers/issues/9518
783,471,658
MDU6SXNzdWU3ODM0NzE2NTg=
9,518
Model Hub hanging in model's loading
{ "login": "loretoparisi", "id": 163333, "node_id": "MDQ6VXNlcjE2MzMzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loretoparisi", "html_url": "https://github.com/loretoparisi", "followers_url": "https://api.github.com/users/loretoparisi/followers", "following_url": "https://api.github.com/users/loretoparisi/following{/other_user}", "gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}", "starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions", "organizations_url": "https://api.github.com/users/loretoparisi/orgs", "repos_url": "https://api.github.com/users/loretoparisi/repos", "events_url": "https://api.github.com/users/loretoparisi/events{/privacy}", "received_events_url": "https://api.github.com/users/loretoparisi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Adding some more info:\r\n\r\nThe api call returns to the model endpoint `503 (Service Unavailable)` and the error message\r\n```json\r\n {\"error\":\"Model Musixmatch/umberto-wikipedia-uncased-v1 is currently loading\",\"estimated_time\":10}\r\n```\r\n\r\nThen while the model is loading a new error comes out:\r\n\r\n```\r\nbundle.5e4ae99.js:1 Uncaught (in promise) TypeError: Failed to fetch\r\n```\r\n\r\nThank you!", "pinging @Narsil ! :)", "Hi @loretoparisi ,\r\n\r\nSorry for the delayed answer. The problem was linked to you tokenizer that somehow had a failure when it was transformed automatically into a Fast one. (Actually it worked well, but the result could not be saved properly). I fixed your tokenizer by adding the precomputed result for Fast tokenizer:\r\n\r\nhttps://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1/commit/483eca5f6b781ddb811e590fb584cc2e1d2b662e\r\n\r\nEverything seems to be working properly now (and loads fast)\r\n", "@Narsil the inference outputs seem weird though, like the tokenizer doesn't uncase inputs: https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1?text=Roma+%C3%A8+la+%3Cmask%3E+d%27Italia\r\n\r\n<img width=\"709\" alt=\"Screenshot 2021-01-22 at 19 47 53\" src=\"https://user-images.githubusercontent.com/326577/105532172-79ac3980-5cb8-11eb-908d-9d9c7b3b80d9.png\">\r\n", "- <unk> are explainable because this model uses only lowercase, so all MAJs are unks.\r\n- c/a at start end was an error in the config (it might be because, there are some automatic fixed offsets for Camembert that might not actually be used by this model).\r\n- The fact that some output are different from others is simply hardcoded in the widget (and is not correct IMHO)", "@Narsil thank you for your help, there is anything that we can do/test by our side? cc @simonefrancia \r\nThanks!", "> * `<unk>` are explainable because this model uses only lowercase, so all MAJs are unks.\r\n\r\nSure, this means that there's some missing config for the tokenizer. See this model for example: https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France.\r\n\r\n\r\n\r\n> * The fact that some output are different from others is simply hardcoded in the widget (and is not correct IMHO)\r\n\r\nnot sure what you mean here. cc @n1t0 ", "> Sure, this means that there's some missing config for the tokenizer. See this model for example: https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France.\r\n\r\nI can't make any choice about what's more reasonable for the end model, the current tokenizer is exactly what `sentencepiece` would do (we export all variables from it, by using the `precompiled_charsmap`).\r\n@loretoparisi if you want to actually force lowercasing input you can by changing `normalizer` within `tokenizer.json` to `Sequence` with a `Lowercase` then the `precompiled_charsmap`. But be aware that you won't have the same results as the raw SPM tokenizer anymore. Let me know if you want to do that I can do it, but again be careful of the impacts it could have for the model.\r\n\r\n> not sure what you mean here. cc @n1t0\r\n\r\nThis: https://github.com/huggingface/moon-landing/blob/master/front/js-src/lib/widgets/text-classification.ts#L45\r\nAlso see PR on transformers that could solve this : https://github.com/huggingface/transformers/pull/9783", "@Narsil I think there are several different things going on here:\r\n - The input doesn't get lowercased. 
This is true for both the fast and slow tokenizers, so yes, the conversion from slow to fast went well, but there's still a question about whether this should be fixed somehow (since the config contains `do_lowercase=True`, I think it was expected). If yes, both slow and fast tokenizers should be fixed.\r\n - Even if we don't look at the `<unk>`, the output still seems weird. After digging a bit, it seems that the IDs generated by the fast version of the tokenizer are not aligned with the slow one:\r\n```python\r\nfrom transformers import AutoTokenizer, pipeline\r\n\r\n\r\ndef run_input(input):\r\n tok_slow = AutoTokenizer.from_pretrained(\"Musixmatch/umberto-wikipedia-uncased-v1\", use_fast=False)\r\n p_slow = pipeline(\"fill-mask\", model=\"Musixmatch/umberto-wikipedia-uncased-v1\", tokenizer=tok_slow)\r\n ids_slow = tok_slow.encode(input)\r\n p_output_slow = p_slow(input)\r\n\r\n tok_fast = AutoTokenizer.from_pretrained(\"Musixmatch/umberto-wikipedia-uncased-v1\", use_fast=True)\r\n p_fast = pipeline(\"fill-mask\", model=\"Musixmatch/umberto-wikipedia-uncased-v1\", tokenizer=tok_fast)\r\n ids_fast = tok_fast.encode(input)\r\n p_output_fast = p_fast(input)\r\n\r\n print(\"Running with input: \", input)\r\n print(\"SLOW:\")\r\n print(ids_slow)\r\n print(p_output_slow)\r\n\r\n print(\"FAST:\")\r\n print(ids_fast)\r\n print(p_output_fast)\r\n\r\n\r\nrun_input(\"Roma è la <mask> d'Italia\")\r\nrun_input(\"roma è la <mask> d'italia\")\r\n```\r\nGives the following output:\r\n```python\r\nRunning with input: Roma è la <mask> d'Italia\r\nSLOW:\r\n[5, 31908, 3, 31912, 79, 97, 51, 32004, 7, 31931, 3, 11007, 6]\r\n[\r\n{'sequence': \"<s> <unk>oma è la lingua d'<unk>talia</s>\", 'score': 0.04120568186044693, 'token': 1476, 'token_str': '▁lingua'}, \r\n{'sequence': \"<s> <unk>oma è la città d'<unk>talia</s>\", 'score': 0.023448798805475235, 'token': 521, 'token_str': '▁città'}, \r\n{'sequence': \"<s> <unk>oma è la dea d'<unk>talia</s>\", 'score': 0.022841867059469223, 'token': 4591, 'token_str': '▁dea'}, \r\n{'sequence': \"<s> <unk>oma è la terra d'<unk>talia</s>\", 'score': 0.02243848517537117, 'token': 1415, 'token_str': '▁terra'}, \r\n{'sequence': \"<s> <unk>oma è la capitale d'<unk>talia</s>\", 'score': 0.01755419932305813, 'token': 3152, 'token_str': '▁capitale'}\r\n]\r\nFAST:\r\n[1, 31904, 0, 31908, 75, 93, 47, 32001, 3, 31927, 0, 11003, 2]\r\n[\r\n{'sequence': \"<s> <unk>oma è laà d'<unk>talia</s>\", 'score': 0.4644460380077362, 'token': 31936, 'token_str': 'à'},\r\n{'sequence': \"<s> <unk>oma è la<mask> d'<unk>talia</s>\", 'score': 0.41339975595474243, 'token': 32001, 'token_str': '<mask>'},\r\n{'sequence': \"<s> <unk>oma è laena d'<unk>talia</s>\", 'score': 0.02151116542518139, 'token': 408, 'token_str': 'ena'},\r\n{'sequence': \"<s> <unk>oma è laè d'<unk>talia</s>\", 'score': 0.01422190386801958, 'token': 31935, 'token_str': 'è'},\r\n{'sequence': \"<s> <unk>oma è la ten d'<unk>talia</s>\", 'score': 0.0057907504960894585, 'token': 685, 'token_str': '▁ten'}\r\n]\r\n\r\nRunning with input: roma è la <mask> d'italia\r\nSLOW:\r\n[5, 764, 97, 51, 32004, 7, 31931, 31911, 11007, 6]\r\n[\r\n{'sequence': \"<s> roma è la bandiera d'italia</s>\", 'score': 0.13166911900043488, 'token': 3525, 'token_str': '▁bandiera'},\r\n{'sequence': \"<s> roma è la capitale d'italia</s>\", 'score': 0.0553407184779644, 'token': 3152, 'token_str': '▁capitale'},\r\n{'sequence': \"<s> roma è la nazionale d'italia</s>\", 'score': 0.04516282677650452, 'token': 918, 'token_str': '▁nazionale'},\r\n{'sequence': \"<s> roma è la 
zona d'italia</s>\", 'score': 0.022440679371356964, 'token': 1740, 'token_str': '▁zona'},\r\n{'sequence': \"<s> roma è la regione d'italia</s>\", 'score': 0.02204475924372673, 'token': 1472, 'token_str': '▁regione'}\r\n]\r\nFAST:\r\n[1, 760, 93, 47, 32001, 3, 31927, 31907, 11003, 2]\r\n[\r\n{'sequence': \"<s> roma è la<mask> d'italia</s>\", 'score': 0.9972749352455139, 'token': 32001, 'token_str': '<mask>'},\r\n{'sequence': \"<s> roma è laà d'italia</s>\", 'score': 0.001777052297256887, 'token': 31936, 'token_str': 'à'},\r\n{'sequence': \"<s> roma è la pai d'italia</s>\", 'score': 0.00022994846221990883, 'token': 14871, 'token_str': '▁pai'},\r\n{'sequence': \"<s> roma è la raffigura d'italia</s>\", 'score': 0.00011272338451817632, 'token': 15184, 'token_str': '▁raffigura'},\r\n{'sequence': \"<s> roma è la hiv d'italia</s>\", 'score': 0.00011238666047574952, 'token': 28952, 'token_str': '▁hiv'}\r\n]\r\n```\r\nAs you can see, the output using the slow tokenizer seems fine, while the other doesn't.", "Okay this is now fixed: https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1/commit/713d59922ccb4b5fc31a527ce2d785c23533363b\r\n\r\nThis 4 offset in the tokens is hardcoded for Camembert based tokenizers :\r\nhttps://github.com/huggingface/transformers/blob/937f67074d6728f145d54d6ea87221a46303363d/src/transformers/models/camembert/tokenization_camembert.py#L241\r\n\r\nI'm doing a pass on all BPE based spm to check various behaviors.", "Hi @Narsil,\r\nthanks for your support in Umberto.\r\nthanks also for making Umberto wikipedia alive again.\r\n\r\nWe see something not usual that replaces <mask> token.\r\n![Schermata 2021-02-10 alle 11 34 12](https://user-images.githubusercontent.com/7140210/107498550-3ac12380-6b94-11eb-9aa3-dc5498486d12.png)\r\n\r\nIn this example mask token is not replaced by a single BPE token, but an entire sentence and that sounds strange.\r\nIf there is something that we can do on our side, let us know.\r\nThanks", "@simonefrancia are you referring to the third result in the screenshot?", "@julien-c yes. My doubt is that input sentence is repeated for the third result.", "I think it's the widget's intended behavior for BPE when we are not able to display the BPE token by itself. But we can take a look...\r\n\r\nHow are the other results, are they sensible?", "I confirm it's the widget because suggested result is len <2, it's trying to repeat the full sentence instead of just the token.\r\nAnd the first C is ignored because it's a `<unk>` from the tokenizer's standpoint. ", "I found other interesting cases, for example this one, when mask is at starting point.\r\n\r\n![Schermata 2021-02-10 alle 11 56 35](https://user-images.githubusercontent.com/7140210/107501006-23d00080-6b97-11eb-97f8-7cfee6744b60.png)\r\n\r\nIn case we don't specify anything before `<mask>`, something goes wrong. My doubt is that in this case `<mask>` token is replaced by `<s>` token. I tried to insert `<s>` before `<mask>` token and it works.\r\n\r\n![Schermata 2021-02-10 alle 11 53 19](https://user-images.githubusercontent.com/7140210/107500795-e4091900-6b96-11eb-91a0-851cc0015aa1.png)\r\n\r\nHope this can help you.\r\n", "@Narsil Ok, but is it possible to force output that would be `<unk>` (because uppercase) to lowercase, in order that `<unk>` tokens can't appear? 
wikipedia model is lower case, so we can force to treat only lowercase words.\r\nThanks", "Hi @simonefrancia.\r\nIn order to force lowercase, you can do it in the Fast tokenizer but that would lead to different results between Slow and Fast tokenizers again.\r\n\r\n> @loretoparisi if you want to actually force lowercasing input you can by changing normalizer within tokenizer.json to Sequence with a Lowercase then the precompiled_charsmap. But be aware that you won't have the same results as the raw SPM tokenizer anymore. Let me know if you want to do that I can do it, but again be careful of the impacts it could have for the model.\r\n\r\nAs for the widget, a fix is coming (it's really a display issue, if you look at the raw results it should make more sense).", "I opened a new issue to keep track of the lowercasing issue as this is something that would probably be helpful for many tokenizers. (cf #10121)\r\n\r\nI believe everything else has been fixed, has it?", "I think so but I'll let @simonefrancia confirm.", "for [umberto-wikipedia](https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1) I think that's all, guys. Thanks\r\nInstead, for [umberto-commoncrawl ](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1)model keeps loading. Also there are same problems in tokenizer? \r\n\r\n\r\n", "Yes it's the same problem. Do you want me to fix it in the same way ? (Hopefully this time it works right off the bat.)\r\n\r\nAre there any other models that could be under the same flag ? (I detected only this one during my full sweep for your organization)", "For our organization, we have only two models, umberto-wikipedia ( the one you fixed) and umberto-commoncrawl ( the one to be fixed). \r\nUmberto commoncrawl is cased, so maybe it could be a different problem or a different way to be fixed, but we would like it works. \r\nthanks for your support", "It's fixed now : https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1?text=Lo+scopo+della+vita+%C3%A8+%3Cmask%3E.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,618
1,618
CONTRIBUTOR
null
@Narsil when loading some models, the loading hangs at 80-90%. <img width="768" alt="Schermata 2021-01-11 alle 16 17 31" src="https://user-images.githubusercontent.com/163333/104202185-e0b12f00-542a-11eb-9e34-27f88ca232ab.png"> In this case it's [this](https://huggingface.co/mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it?text=Dove+vivo%3F) one.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9518/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9518/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9517
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9517/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9517/comments
https://api.github.com/repos/huggingface/transformers/issues/9517/events
https://github.com/huggingface/transformers/issues/9517
783,456,077
MDU6SXNzdWU3ODM0NTYwNzc=
9,517
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
{ "login": "duttaprat", "id": 29531232, "node_id": "MDQ6VXNlcjI5NTMxMjMy", "avatar_url": "https://avatars.githubusercontent.com/u/29531232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/duttaprat", "html_url": "https://github.com/duttaprat", "followers_url": "https://api.github.com/users/duttaprat/followers", "following_url": "https://api.github.com/users/duttaprat/following{/other_user}", "gists_url": "https://api.github.com/users/duttaprat/gists{/gist_id}", "starred_url": "https://api.github.com/users/duttaprat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/duttaprat/subscriptions", "organizations_url": "https://api.github.com/users/duttaprat/orgs", "repos_url": "https://api.github.com/users/duttaprat/repos", "events_url": "https://api.github.com/users/duttaprat/events{/privacy}", "received_events_url": "https://api.github.com/users/duttaprat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Could you please put the information required in the issue template? I.e, everything related to your environment.", "@LysandreJik I have updated the original question based on your suggestion. ", "You're loading your configuration with:\r\n\r\n```py\r\nconfig = BertConfig.from_pretrained(\"models1/our_fine_tuned_model_definition+comment.pt\", output_hidden_states=True) \r\n```\r\n\r\nIs `models1/our_fine_tuned_model_definition+comment.pt` a directory containing a `config.json` file?", "No, that folder does not contain any `config.json` file. Actually, I took the pretrained SciBERT model and saved it in my local system using the following comment \r\n\r\n```\r\npath_to_model='models1/our_fine_tuned_model_definition+comment.pt'\r\ntorch.save(net_copy.state_dict(), path_to_model)\r\n```\r\n\r\nAs a novice, I am not sure how to save the `config.json` file. Please help me with that. \r\n\r\nThanks in advance. ", "I recommend reading the [quickstart (#using-the-model)](https://huggingface.co/transformers/quicktour.html#using-the-model) to understand the loading/saving of models!\r\n\r\nI guess you loaded the model this way:\r\n\r\n```py\r\nfrom transformers import AutoModel\r\n\r\nmodel = AutoModel.from_pretrained(\"allenai/scibert_scivocab_cased\")\r\n```\r\nyou should save the model like this:\r\n```py\r\nmodel.save_pretrained(\"directory\")\r\n```\r\n\r\nThis will create a directory containing the following:\r\n\r\n```py\r\n!ls directory\r\nconfig.json pytorch_model.bin\r\n```\r\n\r\nYou can then load your configuration from that very easily:\r\n\r\n```py\r\nBertConfig.from_pretrained(\"directory\")\r\n```\r\nor load the model directly using `AutoModel` or `BertModel`:\r\n```py\r\nAutoModel.from_pretrained(\"directory\")\r\n# or\r\nBertModel.from_pretrained(\"directory\")\r\n```", "Please note that when downloading the `allenai/scibert_scivocab_cased` model, it's cached in your system. You can then freely reload it with the same identifier without re-downloading the model.\r\n\r\nExcept if you modify the model, for example by fine-tuning, you shouldn't need to save it to disk manually.", "@LysandreJik Thanks a lot ", "My pleasure. Closing the issue as resolved." ]
1,610
1,610
1,610
NONE
null
When I trying to load a saved fine-tuned BERT model, I am facing 'UnicodeDecodeError'. The sample code is ``` from transformers import AutoTokenizer, AutoModel, AdamW, get_linear_schedule_with_warmup, BertConfig self.bert_layer = AutoModel.from_pretrained(bert_model) config = BertConfig.from_pretrained("models1/our_fine_tuned_model_definition+comment.pt", output_hidden_states=True) state_dict = torch.load("models1/our_fine_tuned_model_definition+comment.pt", map_location=torch.device('cpu')) self.bert_layer.load_state_dict(state_dict, config=config) ``` ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Ubuntu - Python version: 3.7.9 - PyTorch version (GPU?): 1.6.0(yes) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ## Error: ``` Traceback (most recent call last): File "PhenoBERT_trained_using_finetuned_model_1.py", line 374, in <module> net = SentencePairClassifier(bert_model, freeze_bert=freeze_bert) File "PhenoBERT_trained_using_finetuned_model_1.py", line 107, in __init__ config = BertConfig.from_pretrained("models1/our_fine_tuned_model_definition+comment.pt", output_hidden_states=True) File "/home/pratik/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 315, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/pratik/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 360, in get_config_dict config_dict = cls._dict_from_json_file(resolved_config_file) File "/home/pratik/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 442, in _dict_from_json_file text = reader.read() File "/home/pratik/anaconda3/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
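Editor's note (not part of the original issue): as the comments above explain, `from_pretrained` expects a directory containing a `config.json`, which `save_pretrained` writes alongside the weights. A minimal sketch, where `"directory"` is an arbitrary example path:

```python
# Illustrative sketch only: save with save_pretrained() so config.json is created,
# then reload the config/model from that directory.
from transformers import AutoModel, BertConfig

model = AutoModel.from_pretrained("allenai/scibert_scivocab_cased")
# ... fine-tune the model here ...
model.save_pretrained("directory")  # writes config.json and pytorch_model.bin

config = BertConfig.from_pretrained("directory", output_hidden_states=True)
model = AutoModel.from_pretrained("directory", config=config)
```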
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9517/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9517/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9516
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9516/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9516/comments
https://api.github.com/repos/huggingface/transformers/issues/9516/events
https://github.com/huggingface/transformers/pull/9516
783,435,994
MDExOlB1bGxSZXF1ZXN0NTUyNzk1NjEx
9,516
Make doc styler behave properly on Windows
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
COLLABORATOR
null
# What does this PR do? This is code that should have been pushed in #9488 but wasn't because... Friday afternoon and my brain was apparently fried. Making a clean PR of it! Fixes #9438
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9516/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9516", "html_url": "https://github.com/huggingface/transformers/pull/9516", "diff_url": "https://github.com/huggingface/transformers/pull/9516.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9516.patch", "merged_at": 1610378725000 }
https://api.github.com/repos/huggingface/transformers/issues/9515
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9515/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9515/comments
https://api.github.com/repos/huggingface/transformers/issues/9515/events
https://github.com/huggingface/transformers/issues/9515
783,417,974
MDU6SXNzdWU3ODM0MTc5NzQ=
9,515
Can't run T5 models because of missing protoc
{ "login": "205g0", "id": 74575852, "node_id": "MDQ6VXNlcjc0NTc1ODUy", "avatar_url": "https://avatars.githubusercontent.com/u/74575852?v=4", "gravatar_id": "", "url": "https://api.github.com/users/205g0", "html_url": "https://github.com/205g0", "followers_url": "https://api.github.com/users/205g0/followers", "following_url": "https://api.github.com/users/205g0/following{/other_user}", "gists_url": "https://api.github.com/users/205g0/gists{/gist_id}", "starred_url": "https://api.github.com/users/205g0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/205g0/subscriptions", "organizations_url": "https://api.github.com/users/205g0/orgs", "repos_url": "https://api.github.com/users/205g0/repos", "events_url": "https://api.github.com/users/205g0/events{/privacy}", "received_events_url": "https://api.github.com/users/205g0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "If you install SentencePiece `pip install sentencepiece`, do you still get that error?", "> If you install SentencePiece `pip install sentencepiece`, do you still get that error?\r\n\r\nI had it already installed: `sentencepiece==0.1.91`", "FWIW: when `I import protoc` in e.g. `ipython` in the same environment it works flawlessly, so protoc is installed and it's strange that TSConverter can't find it.", "Ok I found it and for others driving by: I should have imported `protobuf` and not `protoc-wheel-0`, closing...", "`pip install protobuf` solved it for me", "> `pip install protobuf` solved it for me\r\n\r\nI too didn't have to downgrade it, just installed a missing `protobuf` (latest version). This can be reproduced in e.g. a Hugging Face example for e.g. DONUT document classifier using our latest CUDA 11.8 containers: `mirekphd/cuda-11.8-cudnn8-devel-ubuntu22.04:20230928`. Note that the official `nvidia/cuda/11.8.0-cudnn8-devel-ubuntu22.04` containers seem to come with `protobuf` already preinstalled, so you won't reproduce the bug there).\r\n\r\nPerhaps `protobuf` should be added explicitly as a dependency of `transformers`?", "Hi\r\n\r\n`transformers` has a lot of models involved, and if we put everything as direct dependency, it would be very long and heavy to install. Even `torch` is not a direct dependency :-)\r\n\r\nYou can always install it as `pip install transformers[dev]` and I believe you will get `protobuf` (along with a lot of stuff).", "I'm still facing the same error. I have fine tuned mistral model, but I'm trying to inference it, it's still giving me:\r\n\r\nCould not complete request to HuggingFace API, Status Code: 500, Error: \\nLlamaConverter requires the protobuf library but it was not found in your environment. Checkout the instructions on the\\ninstallation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones\\nthat match your environment. Please note that you may need to restart your runtime after installation.\\n\r\n\r\n\r\nI've done: pip install protobuf, in both env (fine tuning and inferencing)" ]
1,610
1,703
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Linux-4.15.0-128-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.1+cpu (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten @patil-suraj @dwadden ## Information Model I am using T5, I tried: - allenai/unifiedqa-t5-large The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("allenai/unifiedqa-t5-large") model = AutoModelForSeq2SeqLM.from_pretrained("allenai/unifiedqa-t5-large") ``` The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) question answering ## To reproduce Steps to reproduce the behavior: 1. Install all dependencies 2. Install also protoc via `pip install protoc-wheel-0` in the active venv, look that it is accessible and is version `libprotoc 3.14.0` 3. run the above code for model initialization <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior model is initialized property without any error ## Actual behavior I still get... ``` ... ImportError: T5Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones that match your environment. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9515/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9514
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9514/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9514/comments
https://api.github.com/repos/huggingface/transformers/issues/9514/events
https://github.com/huggingface/transformers/pull/9514
783,376,203
MDExOlB1bGxSZXF1ZXN0NTUyNzQ1OTAw
9,514
[ProphetNet] Fix naming and wrong config
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Thank you @patrickvonplaten the changes look great! One last suggestion on my side since this PR does some renaming of the modules: I believe the naming of the `ProphetNetSelfAttention` is misleading, since it is used as a cross attention in the decoder layer:\r\n> \r\n> https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/prophetnet/modeling_prophetnet.py#L1082\r\n> \r\n> \r\n> Maybe a more appropriate name would be `ProphetNetBaseAttention` or simply `ProphetNetAttention` ?\r\n> There is also a typo in `ProhpetNetPositionalEmbeddings` and `ProhpetNetFeedForward`\r\n\r\nI agree with you! I should be more careful next time when naming the classes :-) " ]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> @guillaume-be would be great if you can review here as well This PR fixes a bad naming and wrong usage of the config parameters. Since all prophet models online have ```config.num_encoder_attention_heads==config.num_decoder_attention_heads``` this change should not lead to any problems. Luckily it was caught early on by @guillaume-be Fixes #9485 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9514/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9514", "html_url": "https://github.com/huggingface/transformers/pull/9514", "diff_url": "https://github.com/huggingface/transformers/pull/9514.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9514.patch", "merged_at": 1610442606000 }
https://api.github.com/repos/huggingface/transformers/issues/9513
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9513/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9513/comments
https://api.github.com/repos/huggingface/transformers/issues/9513/events
https://github.com/huggingface/transformers/pull/9513
783,354,780
MDExOlB1bGxSZXF1ZXN0NTUyNzI4MDAx
9,513
[TF Led] Fix flaky TF Led test
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @LysandreJik @sgugger @jplu ", "Thanks for fixing!" ]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? The reason why the TF LED test is flaky was not fully fixed in: https://github.com/huggingface/transformers/pull/9459 and is actually the following: Currently the `decoder_attention_mask` can have a `0` at its first input: ```python decoder_attention_mask[:, 0] == 0 ``` Since the decoder uses a causal mask, this however leads to problems as a softmax over only very large negative numbers in computed. Now since TF and PT use slightly different large numbers, we can see significant differences between the models. The solution is to make sure that the `decoder_attention_mask` used for the `tf_pt_equivalence` test cannot be zero at the first position (I've done the same changes for all TFBart models in: https://github.com/huggingface/transformers/pull/9497 and also made sure in https://github.com/huggingface/transformers/pull/9497 that the TF templates are correctly updated )
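Editor's note (not part of the original pull request): the constraint described above — a causal decoder mask plus an all-masked first position yields a softmax over only large negative values — can be expressed in test code roughly as below. This is a sketch of the idea, not the PR's actual diff:

```python
# Illustrative sketch only: force the first position of the decoder attention mask
# to 1 so the causal self-attention never softmaxes over an entirely masked row.
import tensorflow as tf

decoder_attention_mask = tf.constant([[0, 1, 1, 1], [1, 1, 1, 0]], dtype=tf.int32)
decoder_attention_mask = tf.concat(
    [tf.ones_like(decoder_attention_mask[:, :1]), decoder_attention_mask[:, 1:]], axis=-1
)
```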
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9513/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9513", "html_url": "https://github.com/huggingface/transformers/pull/9513", "diff_url": "https://github.com/huggingface/transformers/pull/9513.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9513.patch", "merged_at": 1610370889000 }
https://api.github.com/repos/huggingface/transformers/issues/9512
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9512/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9512/comments
https://api.github.com/repos/huggingface/transformers/issues/9512/events
https://github.com/huggingface/transformers/pull/9512
783,340,668
MDExOlB1bGxSZXF1ZXN0NTUyNzE2MDU5
9,512
Fix template
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? This PR fixes the TF template for BERT-like models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9512/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9512", "html_url": "https://github.com/huggingface/transformers/pull/9512", "diff_url": "https://github.com/huggingface/transformers/pull/9512.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9512.patch", "merged_at": 1610370209000 }
https://api.github.com/repos/huggingface/transformers/issues/9511
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9511/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9511/comments
https://api.github.com/repos/huggingface/transformers/issues/9511/events
https://github.com/huggingface/transformers/pull/9511
783,253,997
MDExOlB1bGxSZXF1ZXN0NTUyNjQyODA5
9,511
Shouldn't stale issues/PRs with feature request label
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "These need to be applied manually; we could probably do some of these automatically but haven't thought about that yet. While we think of this we can apply the label manually as feature requests come up." ]
1,610
1,610
1,610
MEMBER
null
Shouldn't stale issues/PRs with feature request label
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9511/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9511", "html_url": "https://github.com/huggingface/transformers/pull/9511", "diff_url": "https://github.com/huggingface/transformers/pull/9511.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9511.patch", "merged_at": 1610444956000 }
https://api.github.com/repos/huggingface/transformers/issues/9510
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9510/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9510/comments
https://api.github.com/repos/huggingface/transformers/issues/9510/events
https://github.com/huggingface/transformers/issues/9510
783,244,474
MDU6SXNzdWU3ODMyNDQ0NzQ=
9,510
config.json not found when loading fasttext-language-id model
{ "login": "nbeuchat", "id": 8236283, "node_id": "MDQ6VXNlcjgyMzYyODM=", "avatar_url": "https://avatars.githubusercontent.com/u/8236283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbeuchat", "html_url": "https://github.com/nbeuchat", "followers_url": "https://api.github.com/users/nbeuchat/followers", "following_url": "https://api.github.com/users/nbeuchat/following{/other_user}", "gists_url": "https://api.github.com/users/nbeuchat/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbeuchat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbeuchat/subscriptions", "organizations_url": "https://api.github.com/users/nbeuchat/orgs", "repos_url": "https://api.github.com/users/nbeuchat/repos", "events_url": "https://api.github.com/users/nbeuchat/events{/privacy}", "received_events_url": "https://api.github.com/users/nbeuchat/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[ { "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false } ]
[ "Hi @nbeuchat this is a fasttext model, not a `transformers` model, so you can't load it that way.\r\n\r\nI've updated the main button on the webpage to make it clearer that you need to use the model in fasttext:\r\n\r\n<img width=\"1067\" alt=\"Screenshot 2021-01-11 at 19 21 07\" src=\"https://user-images.githubusercontent.com/326577/104222282-11d03600-5410-11eb-9b03-307fc776f197.png\">\r\n<img width=\"885\" alt=\"Screenshot 2021-01-11 at 19 21 56\" src=\"https://user-images.githubusercontent.com/326577/104222284-14329000-5410-11eb-842a-05fa8e05c1ca.png\">\r\n\r\n\r\n", "Also cc'ing @thomwolf and @celebio <3", "Got it, thanks for the info and for the quick update! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,618
1,618
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @julien-c ## Information Model I am using (Bert, XLNet ...): [julien-c/fasttext-language-id](https://huggingface.co/julien-c/fasttext-language-id) The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("julien-c/fasttext-language-id") model = AutoModel.from_pretrained("julien-c/fasttext-language-id") ``` Which returns the following error: ``` 404 Client Error: Not Found for url: https://huggingface.co/julien-c/fasttext-language-id/resolve/main/config.json ``` ## Expected behavior The model should load or the config file should be present
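Editor's note (not part of the original issue): as clarified in the comments above, this repository hosts a fastText model rather than a `transformers` checkpoint, so it is loaded with the `fasttext` library instead of `AutoModel`. A minimal sketch; the model file name below is an assumption about what the downloaded binary is called:

```python
# Illustrative sketch only: language identification with fastText
# (the model file name is hypothetical).
import fasttext

model = fasttext.load_model("lid.176.bin")
labels, probabilities = model.predict("Bonjour, comment allez-vous ?")
print(labels, probabilities)
```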
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9510/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9509
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9509/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9509/comments
https://api.github.com/repos/huggingface/transformers/issues/9509/events
https://github.com/huggingface/transformers/issues/9509
783,210,973
MDU6SXNzdWU3ODMyMTA5NzM=
9,509
[Benchmark]onnx-export
{ "login": "Zjq9409", "id": 62974595, "node_id": "MDQ6VXNlcjYyOTc0NTk1", "avatar_url": "https://avatars.githubusercontent.com/u/62974595?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zjq9409", "html_url": "https://github.com/Zjq9409", "followers_url": "https://api.github.com/users/Zjq9409/followers", "following_url": "https://api.github.com/users/Zjq9409/following{/other_user}", "gists_url": "https://api.github.com/users/Zjq9409/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zjq9409/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zjq9409/subscriptions", "organizations_url": "https://api.github.com/users/Zjq9409/orgs", "repos_url": "https://api.github.com/users/Zjq9409/repos", "events_url": "https://api.github.com/users/Zjq9409/events{/privacy}", "received_events_url": "https://api.github.com/users/Zjq9409/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @jianqianzhou,\r\n\r\nThanks for raising this issue.\r\n\r\nI would remove the `OMP_NUM_THREADS` environment variable to fully exploit all the cores/threads you have on your machine. Also, tests were run on a machine with 56 cores so it might impact final performances. \r\n\r\nAlso, it might be possible to further optimize the model / quantized model through the ONNX Runtime optimizer tool.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,619
1,619
NONE
null
# 🖥 Benchmarking `transformers` ## Benchmark I follow 04-onnx-export.ipynb this guidance on CPU, and my CPU model is: Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz NUMA node0 CPU(s): 0-9,20-29 NUMA node1 CPU(s): 10-19,30-39 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities ## Set-up I set the environment is: export OMP_NUM_THREADS = 8 export OMP_WAIT_POLICY= 'ACTIVE' then I run all the test program py taskset -c 0-7 python test_.py ## Results when I finish the test program, I print all result,like this: dict_keys(['PyTorch CPU', 'ONNX CPU', 'PyTorch CPU Quantized', 'ONNX CPU Quantized']) dict_values([94.01082992553711, 96.25397443771362, 82.11332082748413, 71.06868505477905]) so, In my operation “PyTorch CPU”:“ONNX CPU Quantized" promote 1.32X but in the guidance “PyTorch CPU”:“ONNX CPU Quantized" promote 5.78X Why didn't it increase so many times confused me?
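Editor's note (not part of the original issue): the reply above suggests dropping `OMP_NUM_THREADS` and letting ONNX Runtime manage threading and graph optimization. A minimal sketch of that kind of session tuning; the model path and thread value are placeholders:

```python
# Illustrative sketch only: explicit session options for the exported model.
from onnxruntime import GraphOptimizationLevel, InferenceSession, SessionOptions

options = SessionOptions()
options.intra_op_num_threads = 0  # 0 lets ONNX Runtime choose based on the machine
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
session = InferenceSession("onnx/bert-base-cased.onnx", options, providers=["CPUExecutionProvider"])
```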
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9509/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9508
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9508/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9508/comments
https://api.github.com/repos/huggingface/transformers/issues/9508/events
https://github.com/huggingface/transformers/issues/9508
783,200,011
MDU6SXNzdWU3ODMyMDAwMTE=
9,508
Bug in distributed code: AssertionError: Default process group is not initialized
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,619
1,619
NONE
null
Hi, I am using transformers 3.5.1 in distributed fashion on multiple GPUs with pytorch 1.6 and python=3.7. I am running: python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr=$host --master_port=$port finetune_trainer.py config.json The Huggingface code only works in distributed fashion when all GPUs are on one machine. If a user wants to run two copies of the code on two machines, the local_rank for both copies is zero, because the code makes its decisions based on local_rank rather than the global rank. Could you have a look please? Thanks.
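A minimal sketch of the distinction the report points at, for illustration only: with `--nnodes=2 --nproc_per_node=1`, the launcher hands both processes `local_rank == 0`, so any "am I the main process?" check has to use the global rank instead.

```python
import torch.distributed as dist

# Both processes in this two-node, one-GPU-per-node setup see local_rank == 0.
dist.init_process_group(backend="nccl", init_method="env://")

local_rank = 0                    # per-node index passed by torch.distributed.launch
global_rank = dist.get_rank()     # 0 on the first node, 1 on the second
is_world_main_process = global_rank == 0
```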
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9508/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9508/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9507
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9507/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9507/comments
https://api.github.com/repos/huggingface/transformers/issues/9507/events
https://github.com/huggingface/transformers/pull/9507
783,197,204
MDExOlB1bGxSZXF1ZXN0NTUyNTk2MDAz
9,507
Remove tolerance + drop_rows_to_fit by default
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The tests look ok to me!", "@NielsRogge removed the `drop_rows_to_fit` attribute in the last commit." ]
1,610
1,610
1,610
MEMBER
null
Please take a look @NielsRogge. I'm setting `drop_rows_to_fit=True` when the user wants that truncation. I don't think that attribute really means anything anymore given the way we handle truncation in the encoding methods, so I think it can be removed altogether. Regarding the integration tests, I finally chose to go with a per-test tolerance instead of a relative tolerance, as the TAPAS model can output very large negative numbers; for example for the `test_inference_question_answering_head_conversational` test: ```py expected_tensor = torch.tensor( [ [ -9997.22461, -9997.22461, -9997.22461, -9997.22461, -9997.22461, -9997.22461, -9997.22461, -9997.22461, -9997.22461, -16.2628059, -10004.082, 15.4330549, 15.4330549, 15.4330549, -9990.42, -16.3270779, -16.3270779, -16.3270779, -16.3270779, -16.3270779, -10004.8506, ] ], device=torch_device, ) ``` I think in these cases it's helpful to see what difference we're looking at directly in the test, and I'm not sure a relative difference would handle such ranges, but I may be mistaken here.
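A minimal sketch of what the per-test absolute tolerance looks like in practice (the tolerance value and the shortened tensors are illustrative, not the exact ones used in the tests):

```python
import torch

# Each integration test asserts against its own expected tensor and tolerance,
# which keeps very large negative logits and small logits in the same check.
expected = torch.tensor([[-9997.22461, -16.2628059, 15.4330549]])
outputs = torch.tensor([[-9997.21000, -16.2600000, 15.4300000]])
assert torch.allclose(outputs, expected, atol=0.05)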
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9507/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9507", "html_url": "https://github.com/huggingface/transformers/pull/9507", "diff_url": "https://github.com/huggingface/transformers/pull/9507.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9507.patch", "merged_at": 1610370162000 }
https://api.github.com/repos/huggingface/transformers/issues/9506
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9506/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9506/comments
https://api.github.com/repos/huggingface/transformers/issues/9506/events
https://github.com/huggingface/transformers/issues/9506
783,175,971
MDU6SXNzdWU3ODMxNzU5NzE=
9,506
Model previews not working for models that require MecabTokenizer
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "Can we add those `pip install -e .[ja]` dependencies to the hosted Inference API @Narsil?", "@julien-c I'm not sure if that is possible, but perhaps this can even be derived from the model card? If a model card specifies Japanese, then the env could include the `[ja]` option.", "If I'm not mistaken all models run in the same env so it's probably not an issue to add a dependency, but I'll let @Narsil answer!", "Yes I created a patch for this, should be up soon.", "and up !", "Confirmed it works, thanks for the quick fix!" ]
1,610
1,610
1,610
COLLABORATOR
null
As brought up [on Twitter](https://twitter.com/polm23/status/1348520920948695043) by user pol23 @polm, Japanese (and other?) models do not work at all on the model page. They will throw an error `You need to install fugashi to use MecabTokenizer.See https://pypi.org/project/fugashi/ for installation.` Perhaps the environment that these models run in should include all optional dependencies, too? You can try it yourself by picking [a model](https://huggingface.co/daigo/bert-base-japanese-sentiment?text=I+likw+po) and trying the inference widget.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9506/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9506/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9505
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9505/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9505/comments
https://api.github.com/repos/huggingface/transformers/issues/9505/events
https://github.com/huggingface/transformers/pull/9505
783,157,029
MDExOlB1bGxSZXF1ZXN0NTUyNTYyODAw
9,505
Fix cardinality
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for fixing!" ]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? Fix the cardinality computation in the TF Trainer. Fix issue #9495
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9505/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9505", "html_url": "https://github.com/huggingface/transformers/pull/9505", "diff_url": "https://github.com/huggingface/transformers/pull/9505.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9505.patch", "merged_at": 1610376139000 }
https://api.github.com/repos/huggingface/transformers/issues/9504
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9504/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9504/comments
https://api.github.com/repos/huggingface/transformers/issues/9504/events
https://github.com/huggingface/transformers/pull/9504
783,150,819
MDExOlB1bGxSZXF1ZXN0NTUyNTU3NTQy
9,504
Fix template
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? Fix the template as stated in https://github.com/huggingface/transformers/pull/9482#issuecomment-757496368
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9504/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9504", "html_url": "https://github.com/huggingface/transformers/pull/9504", "diff_url": "https://github.com/huggingface/transformers/pull/9504.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9504.patch", "merged_at": 1610360485000 }
https://api.github.com/repos/huggingface/transformers/issues/9503
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9503/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9503/comments
https://api.github.com/repos/huggingface/transformers/issues/9503/events
https://github.com/huggingface/transformers/issues/9503
783,133,593
MDU6SXNzdWU3ODMxMzM1OTM=
9,503
torch.nn.modules.module.ModuleAttributeError: 'RecursiveScriptModule' object has no attribute 'resize_token_embeddings'
{ "login": "Mounika2405", "id": 33863436, "node_id": "MDQ6VXNlcjMzODYzNDM2", "avatar_url": "https://avatars.githubusercontent.com/u/33863436?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mounika2405", "html_url": "https://github.com/Mounika2405", "followers_url": "https://api.github.com/users/Mounika2405/followers", "following_url": "https://api.github.com/users/Mounika2405/following{/other_user}", "gists_url": "https://api.github.com/users/Mounika2405/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mounika2405/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mounika2405/subscriptions", "organizations_url": "https://api.github.com/users/Mounika2405/orgs", "repos_url": "https://api.github.com/users/Mounika2405/repos", "events_url": "https://api.github.com/users/Mounika2405/events{/privacy}", "received_events_url": "https://api.github.com/users/Mounika2405/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hi @Mounika2405. Were you able to find a solution for this issue? I am facing a similar issue with another torch_script model", "Facing the same issue. " ]
1,610
1,624
1,619
NONE
null
## Environment info - `transformers` version: 2.0.0 (tried with 4.1.1 as well) - Python version: 3.6.9 - PyTorch version (GPU?): 1.7(False) - Tensorflow version (GPU?): 1.14.0(False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No @LysandreJik @mfuntowicz ## Information Model I am using: GPT2 The tasks I am working on is: * Question generation given a paragraph, clue, style, and answer The problem arises when using: * Torchscript version of fine-tuned GPT2. I have an inference script in which I load the pre-trained tokenizer and add special tokens to it. I resize the token embeddings using the model.resize_token_embeddings() function after adding the special tokens. It works fine for the original PyTorch GPT2 model but fails for the traced(Torchscript) model. The code snippet is as follows: tokenizer = GPT2Tokenizer.from_pretrained(args.model_name_or_path) if args.model_name != "": model = GPT2LMHeadModel.from_pretrained(args.model_name) else: if args.torchscript: model = torch.jit.load(args.ts_model_name_or_path) else: model = GPT2LMHeadModel.from_pretrained(args.model_name_or_path) tokenizer.add_tokens(SPECIAL_TOKENS) model.resize_token_embeddings(len(tokenizer)) Following is the error stack trace: Traceback (most recent call last): File "QG_gpt2_generate.py", line 5, in <module> run() File "/content/drive/MyDrive/home/FQG/src/model/FactorizedQG/GPT2_QG/interact.py", line 231, in run model.resize_token_embeddings(len(tokenizer)) File "/usr/local/lib/python3.6/dist-packages/torch/jit/_script.py", line 558, in __getattr__ return super(RecursiveScriptModule, self).__getattr__(attr) File "/usr/local/lib/python3.6/dist-packages/torch/jit/_script.py", line 288, in __getattr__ return super(ScriptModule, self).__getattr__(attr) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 779, in __getattr__ type(self).__name__, name)) torch.nn.modules.module.ModuleAttributeError: 'RecursiveScriptModule' object has no attribute 'resize_token_embeddings' Is there any other way in which I can perform the same operation of resizing for torchscript models? Thanks
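One possible workaround sketch, assuming the extra tokens are known before export: resize the embeddings on the eager model first, then trace it, so the TorchScript module never needs `resize_token_embeddings`. The token names and file names below are hypothetical.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_tokens(["<clue>", "<style>"])  # hypothetical special tokens

# torchscript=True makes the model return traceable tuples.
model = GPT2LMHeadModel.from_pretrained("gpt2", torchscript=True)
model.resize_token_embeddings(len(tokenizer))  # resize while still an eager nn.Module
model.eval()

# Trace only after the embedding matrix already has its final size.
dummy_input = torch.tensor([tokenizer.encode("Hello, world")])
traced = torch.jit.trace(model, dummy_input)
torch.jit.save(traced, "gpt2_with_special_tokens.pt")
```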
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9503/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9502
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9502/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9502/comments
https://api.github.com/repos/huggingface/transformers/issues/9502/events
https://github.com/huggingface/transformers/issues/9502
783,121,669
MDU6SXNzdWU3ODMxMjE2Njk=
9,502
RoBERTa tokenizer does not add start and end token at the beginning and end of the sentence
{ "login": "ameet-1997", "id": 18645407, "node_id": "MDQ6VXNlcjE4NjQ1NDA3", "avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ameet-1997", "html_url": "https://github.com/ameet-1997", "followers_url": "https://api.github.com/users/ameet-1997/followers", "following_url": "https://api.github.com/users/ameet-1997/following{/other_user}", "gists_url": "https://api.github.com/users/ameet-1997/gists{/gist_id}", "starred_url": "https://api.github.com/users/ameet-1997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ameet-1997/subscriptions", "organizations_url": "https://api.github.com/users/ameet-1997/orgs", "repos_url": "https://api.github.com/users/ameet-1997/repos", "events_url": "https://api.github.com/users/ameet-1997/events{/privacy}", "received_events_url": "https://api.github.com/users/ameet-1997/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You are inspecting an input of the training datalaoder, which has been shuffled. Therefore you do not have the beginning of one of your original documents since by default, the script concatenates all your texts (after adding the special tokens at the beginning and the end) then splits the result in contiguous chunks of length `max_seq_length` (unspecified here so the default of a roberta-base model).\r\n\r\nSo the text you are inspecting is inside one of your original documents, which is why it doesn't have that <s> and </s>\r\n\r\nYou can use the `line_by_line` option to change the script preprocessing to consider each line of your dataset as a separate entry (and apply padding or truncation to always have them of `max_seq_length`), in which case every input will have that `</s>` at the beginning.", "Thanks for the information, this makes sense!" ]
1,610
1,610
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.2.0dev0 - Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-redhat-7.8-Verona - Python version: 3.6.12 - PyTorch version (GPU?): 1.7.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @mfuntowicz @sgugger <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner ray/raytune: @richardliaw @amogkam tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [Yes] the official example scripts: (give details below) The problem occurs when running the `run_mlm.py` file in `examples/language-modeling` * [Yes] my own modified scripts: (give details below) The tasks I am working on is: Language Modeling ## To reproduce Steps to reproduce the behavior: 1. Run `python -m pdb examples/language-modeling/run_mlm.py --train_file= wikitext --dataset_config_name wikitext-2-raw-v1 --output_dir=/tmp/debug --model_type=roberta --config_name=roberta-base --tokenizer_name=roberta-base --learning_rate 1e-4 --num_train_epochs 2 --warmup_steps 10000 --do_train --save_steps 10000 --per_device_train_batch_size 2 --overwrite_output_dir` 2. Insert breakpoint using the following command: (At line `if self.use_amp`):`b ../../src/transformers/trainer.py:1138` 3. Press `c` 4. `print(self.tokenizer.decode(inputs['input_ids'][0]))` The output will look like the following: > ' Photograph : The Very Best of Ringo Starr, and as a bonus track<mask> his<mask>astered<mask> studio album Goodnight Vienna. Since his return<mask> touring in 1989, Starr has performed " Back Off<mask>ogaloo " regularly in concert with the various incarnations of his All @-@ Starr Band. </s> > <s> Commentators have interpreted the song,<mask> particularly this statement<mask> as an<mask><mask> Starr on his former Beatles band facet<mask> McCartney. Starr<mask> denied<mask> such interpretation, instead " claiming that the song was inspired by Bolan and nothing more ", Beatles bi<mask> Robert Rodriguez writes. Starr had publicly criticised<mask>\'s solo albums McCartney<mask> 1970 ) and Ram ( 1971 ) on' <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Ideally the first token should have been `<s>` in RoBERTa because that is the start token. And the last token should have been `</s>` because that is the ending token. But those are not the start or end tokens. Wouldn't this be a departure from the implementation in the RoBERTa paper? PS: Please ignore the strikethrough. No idea why that is appearing. <!-- A clear and concise description of what you would expect to happen. -->
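A minimal sketch showing that the tokenizer itself does add the special tokens; they only appear to be missing because `run_mlm.py` concatenates all documents and re-splits them into fixed-length chunks before batching, as noted in the discussion.

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
ids = tokenizer("The man ate the apple.")["input_ids"]
print(tokenizer.decode(ids))  # '<s>The man ate the apple.</s>'
```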
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9502/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9501
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9501/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9501/comments
https://api.github.com/repos/huggingface/transformers/issues/9501/events
https://github.com/huggingface/transformers/issues/9501
783,079,518
MDU6SXNzdWU3ODMwNzk1MTg=
9,501
Question About Attention Score Computation & Intuition
{ "login": "rezhv", "id": 56566565, "node_id": "MDQ6VXNlcjU2NTY2NTY1", "avatar_url": "https://avatars.githubusercontent.com/u/56566565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rezhv", "html_url": "https://github.com/rezhv", "followers_url": "https://api.github.com/users/rezhv/followers", "following_url": "https://api.github.com/users/rezhv/following{/other_user}", "gists_url": "https://api.github.com/users/rezhv/gists{/gist_id}", "starred_url": "https://api.github.com/users/rezhv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rezhv/subscriptions", "organizations_url": "https://api.github.com/users/rezhv/orgs", "repos_url": "https://api.github.com/users/rezhv/repos", "events_url": "https://api.github.com/users/rezhv/events{/privacy}", "received_events_url": "https://api.github.com/users/rezhv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "HI @rezhv , that's a great question, I would suggest you ask such general questions on the forum https://discuss.huggingface.co/ and use issues to report bugs and to discuss new features :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,619
1,619
NONE
null
When it comes to transformers, the Query and Key matrices are what determine the attention scores. Here is a nice visual taken from Jay [Alammar's blog post ](http://jalammar.github.io/illustrated-transformer/)on transformers that illustrates how attention scores are computed: ![self-attention_softmax](https://user-images.githubusercontent.com/56566565/104143061-e64b3e00-5372-11eb-8b0f-2c9568988aaa.png) As you can see the attention score depends solely on qi and kj vectors multiplied with no additional parameters. However each of these two vectors are calculated through a linear layer **which had the word embedding (+positional) of just 1 word as input.** My question is: how can the network assign attention scores meaningfully if q and k are computed without looking at different parts of the sentence other than their corresponding word? **How can the network produce k and q vectors that when multiplied represent a meaningful attention score if k and q are computed based on a single word embedding?** lets say I want to process this sentence: The man ate the apple; It didn't taste good. When calculating the attention scores for the word 'it', how would the model know to assign a higher attention score to 'apple' (it refers to the apple) than to 'man' or basically any other word? The model had no way of understanding the context of the sentence because q and k are calculated solely based on the embedding of one word and not the sentence as a whole. q for 'it' is computed from the apple's embedding and the same goes for k for 'apple'. The two vectors are then multiplied to get the attention score. wouldn't this mean that if the two words are present in a different sentence but with the same distance the attention score between the two would be identical in the second sentence? What makes sense to me is the classic approach to attention models. Look at the following visual from Andrew NG's deep learning specialization. ![eac4f222d9d468a0c29a71a3830a5c60-c5w3l08attentionmodel-3-638](https://user-images.githubusercontent.com/56566565/104143423-3971c080-5374-11eb-88e0-78454c3b795b.jpg) Here the attention scores are calculated using the hidden states at that timestamp. The hidden states are calculated with FC layers in a bidirectional RNN. In other words a hidden state at a certain timestamp is influenced by the words that come after and before it, So it makes sense that the model is able to calculate attention scores there.
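A minimal numpy sketch of scaled dot-product self-attention for one head, with illustrative dimensions: although each q_i and k_j is a linear map of a single token embedding, every query is scored against every key, and the softmax-weighted sum over value vectors mixes information from the whole sentence into each output position; stacking layers then makes later queries and keys functions of that mixed context.

```python
import numpy as np

np.random.seed(0)
seq_len, d_model, d_k = 5, 512, 64
x = np.random.randn(seq_len, d_model)               # token (+ positional) embeddings
W_q, W_k, W_v = (np.random.randn(d_model, d_k) for _ in range(3))

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_k)                     # (seq_len, seq_len) attention scores
scores -= scores.max(axis=-1, keepdims=True)        # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
context = weights @ V                               # each row now depends on every token
```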
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9501/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9500
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9500/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9500/comments
https://api.github.com/repos/huggingface/transformers/issues/9500/events
https://github.com/huggingface/transformers/issues/9500
782,961,139
MDU6SXNzdWU3ODI5NjExMzk=
9,500
Question on the example script run_glue.py for text classification
{ "login": "xiaolin-cheng", "id": 16944705, "node_id": "MDQ6VXNlcjE2OTQ0NzA1", "avatar_url": "https://avatars.githubusercontent.com/u/16944705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiaolin-cheng", "html_url": "https://github.com/xiaolin-cheng", "followers_url": "https://api.github.com/users/xiaolin-cheng/followers", "following_url": "https://api.github.com/users/xiaolin-cheng/following{/other_user}", "gists_url": "https://api.github.com/users/xiaolin-cheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiaolin-cheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaolin-cheng/subscriptions", "organizations_url": "https://api.github.com/users/xiaolin-cheng/orgs", "repos_url": "https://api.github.com/users/xiaolin-cheng/repos", "events_url": "https://api.github.com/users/xiaolin-cheng/events{/privacy}", "received_events_url": "https://api.github.com/users/xiaolin-cheng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi @xiaolin-cheng \r\n\r\n`run_glue.py` fine-tunes the whole model, it doesn't freeze anything. You would need to manually freeze the base model, you could do this after loading the `ForSequenceClassification` and then freeze the base model. For example for `BertForSequenceClassification` you can access the base model using `model.bert`.\r\n\r\n```python\r\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n\r\nfor param in model.bert.parameters():\r\n param.requires_grad = False\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,619
1,619
NONE
null
When we run this script to train a text classification model, are the weights of the underlying language model frozen and not updated? Whether they are fixed or trainable by default, is there any config option to change that behavior during training? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9500/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9499
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9499/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9499/comments
https://api.github.com/repos/huggingface/transformers/issues/9499/events
https://github.com/huggingface/transformers/pull/9499
782,945,962
MDExOlB1bGxSZXF1ZXN0NTUyMzc3MDcx
9,499
[ray] add maintainers for Ray / Tune
{ "login": "richardliaw", "id": 4529381, "node_id": "MDQ6VXNlcjQ1MjkzODE=", "avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richardliaw", "html_url": "https://github.com/richardliaw", "followers_url": "https://api.github.com/users/richardliaw/followers", "following_url": "https://api.github.com/users/richardliaw/following{/other_user}", "gists_url": "https://api.github.com/users/richardliaw/gists{/gist_id}", "starred_url": "https://api.github.com/users/richardliaw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richardliaw/subscriptions", "organizations_url": "https://api.github.com/users/richardliaw/orgs", "repos_url": "https://api.github.com/users/richardliaw/repos", "events_url": "https://api.github.com/users/richardliaw/events{/privacy}", "received_events_url": "https://api.github.com/users/richardliaw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
COLLABORATOR
null
# What does this PR do? Adds maintainers for Ray / Raytune integration! cc @sgugger <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9499/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9499", "html_url": "https://github.com/huggingface/transformers/pull/9499", "diff_url": "https://github.com/huggingface/transformers/pull/9499.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9499.patch", "merged_at": 1610328857000 }
https://api.github.com/repos/huggingface/transformers/issues/9498
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9498/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9498/comments
https://api.github.com/repos/huggingface/transformers/issues/9498/events
https://github.com/huggingface/transformers/issues/9498
782,923,480
MDU6SXNzdWU3ODI5MjM0ODA=
9,498
Can not load a saved tokenizer using AutoTokenizer
{ "login": "hadifar", "id": 7101287, "node_id": "MDQ6VXNlcjcxMDEyODc=", "avatar_url": "https://avatars.githubusercontent.com/u/7101287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hadifar", "html_url": "https://github.com/hadifar", "followers_url": "https://api.github.com/users/hadifar/followers", "following_url": "https://api.github.com/users/hadifar/following{/other_user}", "gists_url": "https://api.github.com/users/hadifar/gists{/gist_id}", "starred_url": "https://api.github.com/users/hadifar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hadifar/subscriptions", "organizations_url": "https://api.github.com/users/hadifar/orgs", "repos_url": "https://api.github.com/users/hadifar/repos", "events_url": "https://api.github.com/users/hadifar/events{/privacy}", "received_events_url": "https://api.github.com/users/hadifar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @hadifar \r\n\r\nThe `AutoTokenizer` needs to know the model type to load the correct `Tokenizer` class, and that information is stored in the `config` file, so if `config.json` is not present it can not load the correct class. And `config.json` is saved when saving the model using `.save_pretrained` method. To load a separately saved tokenizer you should use the respective tokenizer class " ]
1,610
1,610
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: ubuntu 18.04 - Python version: 3.8 - PyTorch version (GPU?): No - Tensorflow version (GPU?): No - Using GPU in script?: No - Using distributed or parallel set-up in script?: No @mfuntowicz @patrickvonplaten ## Information I'm using following code to save and load t5 tokenizer: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('t5-small') tokenizer.add_tokens(['<sep>', '<hl>']) tokenizer.save_pretrained('./t5-tokenizer-test/') tokenizer2 = AutoTokenizer.from_pretrained('./t5-tokenizer-test/') ``` But it throws the following exception: During handling of the above exception, another exception occurred: ``` Traceback (most recent call last): File "/home/amir/PycharmProjects/question_generation/testifier.py", line 19, in <module> tokenizer2 = AutoTokenizer.from_pretrained('./t5-tokenizer-test/') File "/home/amir/PycharmProjects/question_generation/venv/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 345, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "/home/amir/PycharmProjects/question_generation/venv/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 349, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/amir/PycharmProjects/question_generation/venv/lib/python3.7/site-packages/transformers/configuration_utils.py", line 418, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for './t5-tokenizer-test/'. Make sure that: - './t5-tokenizer-test/' is a correct model identifier listed on 'https://huggingface.co/models' - or './t5-tokenizer-test/' is the correct path to a directory containing a config.json file ``` If I replace Autotokenizer with T5Tokenizer the issue will be fixed: ``` from transformers import T5Tokenizer tokenizer = T5Tokenizer.from_pretrained('t5-small') tokenizer.add_tokens(['<sep>', '<hl>']) tokenizer.save_pretrained('./t5-tokenizer-test/') tokenizer2 = T5Tokenizer.from_pretrained('./t5-tokenizer-test/') ```
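One possible workaround sketch, following the explanation above that `AutoTokenizer` needs a `config.json` to resolve the tokenizer class: save the model config into the same directory as the tokenizer.

```python
from transformers import AutoConfig, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
tokenizer.add_tokens(["<sep>", "<hl>"])
tokenizer.save_pretrained("./t5-tokenizer-test/")

# Also save the model config so AutoTokenizer can infer the model type later.
config = AutoConfig.from_pretrained("t5-small")
config.save_pretrained("./t5-tokenizer-test/")

tokenizer2 = AutoTokenizer.from_pretrained("./t5-tokenizer-test/")
```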
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9498/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9497
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9497/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9497/comments
https://api.github.com/repos/huggingface/transformers/issues/9497/events
https://github.com/huggingface/transformers/pull/9497
782,831,707
MDExOlB1bGxSZXF1ZXN0NTUyMjkzNDAy
9,497
[TFBart] Split TF-Bart
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Awesome work!! Just left few smalll comments. I think we should first find a proper fix #9478 and then merging this one. Switching on/off some tests everytime we touch a model is really not a long term solution, I think a proper template as to be stated first and then afterwards we do the models.\r\n\r\nIMO this PR should be merged and the s2s fix should be applied afterward as said offline. This PR is blocking a new release currently", "> IMO this PR should be merged and the s2s fix should be applied afterward as said offline. This PR is blocking a new release currently\r\n\r\nOk, nevermind, I didn't know you wanted to have it in the next release." ]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? TF mirror of: #9343 - Exact same changes as in #9343 - Docs are improved - TFBlenderbot gets a better integration tests - tf_saved_model & tf_serving tests are disabled for now and should ideally be fixed in https://github.com/huggingface/transformers/pull/9478 after merging this one ## After PR is merged TODO: - [x] Open issue about `facebook/blenderbot_small-90M` tokenizer - cannot download files from hub. Weird issue
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9497/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9497", "html_url": "https://github.com/huggingface/transformers/pull/9497", "diff_url": "https://github.com/huggingface/transformers/pull/9497.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9497.patch", "merged_at": 1610413593000 }
https://api.github.com/repos/huggingface/transformers/issues/9496
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9496/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9496/comments
https://api.github.com/repos/huggingface/transformers/issues/9496/events
https://github.com/huggingface/transformers/issues/9496
782,707,731
MDU6SXNzdWU3ODI3MDc3MzE=
9,496
[make docs] please help make the validation process easier
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "I am not aware of anything that could make life easier on this as I would have implemented it/documented it if I knew of it. Your solution of creating a project with just two files does not work, as it would then very likely be impossible to import the file in question and sphinx needs to do that.\r\n\r\nI'm not aware of any software that lints properly the .rst. Happy to add any new functionality the doc styler that could help here (as this one runs fast and can be run on a given .py/.rst file) though it's medium priority in terms of development of the project (we do want users to be able to build the documentation smoothly and easily but there are other things more important).", "Thank you for feedback, @sgugger \r\n\r\n-------------------\r\n\r\nSo to disable `Warning, treated as error:` I need to drop `-W` in:\r\n\r\n```\r\ncd docs && make html SPHINXOPTS=\"-W\"\r\n```\r\n\r\nand then need to figure out how to skip highlighting as it re-works all files on every run.\r\n\r\n------------------\r\n\r\nI started looking at finding a single page linter that supports sphinx's custom parser. I will post my findings here:\r\n- https://pypi.org/project/restructuredtext-lint/ - says partially supports sphinx\r\n- https://pypi.org/project/doc8/ - supports sphinx, but may have its own demands\r\n\r\n", "Also did you know sphinx has parallel processing with `-j`?\r\n\r\nI added `-a -E`, which forces a full rebuild, just for the test so that we are comparing the same things.\r\n\r\n```\r\ntime make html SPHINXOPTS=\"-a -E\"\r\nreal 1m15.265s\r\nuser 1m15.114s\r\nsys 0m1.790s\r\n```\r\n\r\n```\r\ntime make html SPHINXOPTS=\"-a -E -j 6\"\r\nreal 0m39.555s\r\nuser 1m31.551s\r\nsys 0m7.608s\r\n```\r\n\r\nthis is almost twice as fast! \r\n\r\nIt seems that on my setup `-j 5` is just as fast but less heat get generated (41 sec).\r\n\r\nIt has `-j auto` - to use all cpu cores, but it's a bad idea, since it won't get any faster with 12 or more workers. Any number of workers beyond 5 on my setup provides a tiny speedup.\r\n\r\nDo you think it'd be a good idea to add say `SPHINXOPTS=\"-j 4\"` as the default?\r\n\r\n\r\n", "I filed a bug report https://github.com/sphinx-doc/sphinx/issues/8681 since if that `re-highlighting of all modules` stage gets fixed to not re-run on all modules when only 1 files is modified and I drop `-W` temporarily - then the rebuild should be almost instantaneous for a single modified file and thus we won't need to look for an outside linter.\r\n", "Last time I tried ot use multiprocessing I didn't get any speed up, but it might have been because I was trying the auto option. We can certainly try with 4 cores to begin with.", "Excellent!\r\n\r\nIn general `auto` is almost never a good option w/o knowing the user's setup. 
That's why I never use `make test`, which runs `pytest -n auto` - I have 12 cpu cores and it can't possibly run 12 workers on 2 gpus - the outcome is really bad.", "> I filed a bug report [sphinx-doc/sphinx#8681](https://github.com/sphinx-doc/sphinx/issues/8681) since if that `re-highlighting of all modules` stage gets fixed to not re-run on all modules when only 1 files is modified and I drop `-W` temporarily - then the rebuild should be almost instantaneous for a single modified file and thus we won't need to look for an outside linter.\r\n\r\nsphinx dev has fixed this issue in master, so now `make docs` for one modified file is blazingly fast - ~5sec.\r\n\r\nMost of the overhead is loading tf+pt I think.\r\n```\r\ntime python -c \"import torch, tensorflow\"\r\n2021-01-18 09:54:49.558444: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\r\n\r\nreal 0m1.810s\r\nuser 0m2.375s\r\nsys 0m1.219s\r\n```\r\n\r\nI personally am pretty happy with this outcome, so closing this ticket.", "Oh, that's very nice!" ]
1,610
1,611
1,610
CONTRIBUTOR
null
Writing serious documents in .rst is such a pain because the sphinx builder is terrible at times. If all goes well I can incrementally run `make docs` and it rebuilds just the modified page which is relatively quick, while it still re-runs highlighting on all pages (not needed for the doc I'm working on) But if a single error happens it rebuilds everything from scratch which takes forever and chances are that it fails again are very high. since half the time I have no idea what the error is. It's good if it even tells me the line number but sometimes it doesn't even give any context - what a horrible tool. So I have to do a lot of guessing and a lot of waiting and by the end of it I don't really want to finish the doc I was very inspired to write. There must be a better way to isolate just the page I'm working on. I don't care for cross references, I just want to be able to quickly validate that my page will "compile" and not error. For example how do I hack `make docs` to not do die on: ``` Warning, treated as error: ``` This is extremely painful, as after each error it rebuilds everything which takes forever. I think what would ease the process in this particular situation is to leave warnings as warnings only make them errors when I commit, and obviously on CI. So something that: * doesn't treat warnings as errors * doesn't rebuild everything if something failed in the previous run * doesn't re-run highlighting on all pages * ideally a way to work on just one page of my choice - surely it could detect the only modified file - but if it's too much to ask I would be happy to manually supply it Something like: ``` utils/checksingledoc.py file.rst ``` I don't know sphinx, so it's very hard for me to know what to propose. Perhaps it could build a project made of a single file (or several files) on the fly and that it could solve the problem? Of course, it will need to ignore cross-references as those won't be available and probably other features that I didn't think of. Or perhaps there is an existing 3rd party program that lints .rst in the same way shpinx does and could be configured to do things that will make the doc writing easier? Thank you! @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9496/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9495
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9495/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9495/comments
https://api.github.com/repos/huggingface/transformers/issues/9495/events
https://github.com/huggingface/transformers/issues/9495
782,671,748
MDU6SXNzdWU3ODI2NzE3NDg=
9,495
tf trainer dataset cardinality issue - potentially a bug
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I can confirm! Good catch!", "I resume the work on creating `test_trainer_tf.py`, I promise I will finish it this time. After that, it might be easier to catch the errors in `tf_trainer.py`.", "I take care of this!", "OK, @jplu . Thank you for letting me know about it (I did some check and didn't found it on master, so I thought it was not done yet).", "I encountered this issue while I was running the model.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in __getattr__(self, item)\r\n 265 try:\r\n--> 266 return self.data[item]\r\n 267 except KeyError:\r\n\r\nKeyError: 'cardinality'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n```\r\n\r\nBelow is the code:\r\n\r\n```\r\nfrom transformers import TFAutoModelForSequenceClassification\r\n\r\nmodel = TFAutoModelForSequenceClassification.from_pretrained(\r\n \"distilbert-base-uncased\")\r\n\r\nmodel.compile(optimizer=optimizer) # No loss argument!\r\n\r\nfrom transformers import TFTrainer, TFTrainingArguments\r\n\r\n\r\ntraining_args = TFTrainingArguments(\r\n output_dir=\"./sentiment_model\",\r\n per_device_train_batch_size=32,\r\n per_device_eval_batch_size=32,\r\n num_train_epochs=3,\r\n evaluation_strategy=\"steps\",\r\n eval_steps=500, # Adjust as needed\r\n save_total_limit=2,\r\n)\r\n\r\ntrainer = TFTrainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_tokenized,\r\n eval_dataset=val_tokenized,\r\n)\r\n\r\ntrainer.train() # <- where the error occured\r\n```", "It doesn't look the same issue as the original one. Could you open a new issue, and provide a simple/small dataset along the code to show the issue. Thank you.", "@ydshieh https://github.com/huggingface/transformers/issues/26632#issue-1929838712 " ]
1,610
1,696
1,610
COLLABORATOR
null
In `trainer_tf.py`, line 138, in the method `def get_train_tfdataset(self) -> tf.data.Dataset:` we have `self.num_train_examples = self.train_dataset.cardinality(self.train_dataset).numpy()`. However, in the official TF documentation, [cardinality](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cardinality) is defined as `cardinality()`, which takes no argument. I got the following error ``` File "/home/imo/Desktop/transformers/src/transformers/trainer_tf.py", line 138, in get_train_tfdataset self.num_train_examples = self.train_dataset.cardinality(self.train_dataset).numpy() TypeError: cardinality() takes 1 positional argument but 2 were given ``` I think the current version on master is a bug, and the call should be changed to `self.train_dataset.cardinality().numpy()`. Could you confirm, @jplu? And if it is a bug, let's fix it. Thank you.
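A minimal sketch of the documented usage, assuming a toy in-memory dataset: `cardinality()` is an instance method and takes no dataset argument.

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices(tf.range(10))
num_examples = ds.cardinality().numpy()  # instance method, no argument
assert num_examples == 10
```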
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9495/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9494
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9494/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9494/comments
https://api.github.com/repos/huggingface/transformers/issues/9494/events
https://github.com/huggingface/transformers/pull/9494
782,592,456
MDExOlB1bGxSZXF1ZXN0NTUyMTIwMTA0
9,494
New Updated DistilGPT-2 Finetuning and Generation
{ "login": "tripathiaakash", "id": 15000270, "node_id": "MDQ6VXNlcjE1MDAwMjcw", "avatar_url": "https://avatars.githubusercontent.com/u/15000270?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tripathiaakash", "html_url": "https://github.com/tripathiaakash", "followers_url": "https://api.github.com/users/tripathiaakash/followers", "following_url": "https://api.github.com/users/tripathiaakash/following{/other_user}", "gists_url": "https://api.github.com/users/tripathiaakash/gists{/gist_id}", "starred_url": "https://api.github.com/users/tripathiaakash/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tripathiaakash/subscriptions", "organizations_url": "https://api.github.com/users/tripathiaakash/orgs", "repos_url": "https://api.github.com/users/tripathiaakash/repos", "events_url": "https://api.github.com/users/tripathiaakash/events{/privacy}", "received_events_url": "https://api.github.com/users/tripathiaakash/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Failing Test is fixed on master I believe" ]
1,610
1,610
1,610
CONTRIBUTOR
null
https://github.com/huggingface/transformers/pull/3177 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 --> @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9494/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9494/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9494", "html_url": "https://github.com/huggingface/transformers/pull/9494", "diff_url": "https://github.com/huggingface/transformers/pull/9494.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9494.patch", "merged_at": 1610372079000 }
https://api.github.com/repos/huggingface/transformers/issues/9493
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9493/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9493/comments
https://api.github.com/repos/huggingface/transformers/issues/9493/events
https://github.com/huggingface/transformers/pull/9493
782,552,980
MDExOlB1bGxSZXF1ZXN0NTUyMDg5MTM3
9,493
Added a new DistilGPT2 fine-tuning and generation Tutorial
{ "login": "tripathiaakash", "id": 15000270, "node_id": "MDQ6VXNlcjE1MDAwMjcw", "avatar_url": "https://avatars.githubusercontent.com/u/15000270?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tripathiaakash", "html_url": "https://github.com/tripathiaakash", "followers_url": "https://api.github.com/users/tripathiaakash/followers", "following_url": "https://api.github.com/users/tripathiaakash/following{/other_user}", "gists_url": "https://api.github.com/users/tripathiaakash/gists{/gist_id}", "starred_url": "https://api.github.com/users/tripathiaakash/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tripathiaakash/subscriptions", "organizations_url": "https://api.github.com/users/tripathiaakash/orgs", "repos_url": "https://api.github.com/users/tripathiaakash/repos", "events_url": "https://api.github.com/users/tripathiaakash/events{/privacy}", "received_events_url": "https://api.github.com/users/tripathiaakash/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The tutorial has issues due old code. Will make another pull request with new code." ]
1,610
1,610
1,610
CONTRIBUTOR
null
https://github.com/huggingface/transformers/pull/3177 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 --> @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9493/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9493", "html_url": "https://github.com/huggingface/transformers/pull/9493", "diff_url": "https://github.com/huggingface/transformers/pull/9493.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9493.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/9492
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9492/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9492/comments
https://api.github.com/repos/huggingface/transformers/issues/9492/events
https://github.com/huggingface/transformers/issues/9492
782,531,465
MDU6SXNzdWU3ODI1MzE0NjU=
9,492
Problems with using LongFormer
{ "login": "joy20182018", "id": 37768264, "node_id": "MDQ6VXNlcjM3NzY4MjY0", "avatar_url": "https://avatars.githubusercontent.com/u/37768264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joy20182018", "html_url": "https://github.com/joy20182018", "followers_url": "https://api.github.com/users/joy20182018/followers", "following_url": "https://api.github.com/users/joy20182018/following{/other_user}", "gists_url": "https://api.github.com/users/joy20182018/gists{/gist_id}", "starred_url": "https://api.github.com/users/joy20182018/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joy20182018/subscriptions", "organizations_url": "https://api.github.com/users/joy20182018/orgs", "repos_url": "https://api.github.com/users/joy20182018/repos", "events_url": "https://api.github.com/users/joy20182018/events{/privacy}", "received_events_url": "https://api.github.com/users/joy20182018/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @joy20182018,\r\n\r\nWe cannot guarantee that our library is in sync with other libraries like `https://github.com/allenai/longformer`. Please make sure you follow the advice as written on: https://huggingface.co/transformers/model_doc/longformer.html", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,619
1,619
NONE
null
I am according to the official longformer lot (https://github.com/allenai/longformer) provides methods to use, I use in the code of parts as follows: ` tokenizer_class = BertTokenizer model_class = LongformerModel # directory is fine pretrained_weights = self.pretrainedBertPath tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained('longformer-base-4096', gradient_checkpointing=True) # add_special_tokens will add start and end token input_ids = torch.tensor([tokenizer.encode(text, add_special_tokens=False)]) ` This warning appeared: ` Some weights of the model checkpoint at longformer-base-4096 were not used when initializing LongformerModel: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.0.attention.self.query_global.weight', 'roberta.encoder.layer.0.attention.self.query_global.bias', 'roberta.encoder.layer.0.attention.self.key_global.weight', 'roberta.encoder.layer.0.attention.self.key_global.bias', 'roberta.encoder.layer.0.attention.self.value_global.weight', 'roberta.encoder.layer.0.attention.self.value_global.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.encoder.layer.1.attention.self.key.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.1.attention.self.query_global.weight', 'roberta.encoder.layer.1.attention.self.query_global.bias', 'roberta.encoder.layer.1.attention.self.key_global.weight', 'roberta.encoder.layer.1.attention.self.key_global.bias', 'roberta.encoder.layer.1.attention.self.value_global.weight', 'roberta.encoder.layer.1.attention.self.value_global.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.1.output.dense.weight', 'roberta.encoder.layer.1.output.dense.bias', 'roberta.encoder.layer.1.output.LayerNorm.weight', 'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.key.bias', 
'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.2.attention.self.query_global.weight', 'roberta.encoder.layer.2.attention.self.query_global.bias', 'roberta.encoder.layer.2.attention.self.key_global.weight', 'roberta.encoder.layer.2.attention.self.key_global.bias', 'roberta.encoder.layer.2.attention.self.value_global.weight', 'roberta.encoder.layer.2.attention.self.value_global.bias', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.2.output.dense.weight', 'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.3.attention.self.query_global.weight', 'roberta.encoder.layer.3.attention.self.query_global.bias', 'roberta.encoder.layer.3.attention.self.key_global.weight', 'roberta.encoder.layer.3.attention.self.key_global.bias', 'roberta.encoder.layer.3.attention.self.value_global.weight', 'roberta.encoder.layer.3.attention.self.value_global.bias', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.3.intermediate.dense.bias', 'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.4.attention.self.key.bias', 'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.4.attention.self.query_global.weight', 'roberta.encoder.layer.4.attention.self.query_global.bias', 'roberta.encoder.layer.4.attention.self.key_global.weight', 'roberta.encoder.layer.4.attention.self.key_global.bias', 'roberta.encoder.layer.4.attention.self.value_global.weight', 'roberta.encoder.layer.4.attention.self.value_global.bias', 'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 
'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.encoder.layer.5.attention.self.query_global.weight', 'roberta.encoder.layer.5.attention.self.query_global.bias', 'roberta.encoder.layer.5.attention.self.key_global.weight', 'roberta.encoder.layer.5.attention.self.key_global.bias', 'roberta.encoder.layer.5.attention.self.value_global.weight', 'roberta.encoder.layer.5.attention.self.value_global.bias', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.5.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.6.attention.self.value.weight', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.6.attention.self.query_global.weight', 'roberta.encoder.layer.6.attention.self.query_global.bias', 'roberta.encoder.layer.6.attention.self.key_global.weight', 'roberta.encoder.layer.6.attention.self.key_global.bias', 'roberta.encoder.layer.6.attention.self.value_global.weight', 'roberta.encoder.layer.6.attention.self.value_global.bias', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.6.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.self.query.weight', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.7.attention.self.value.bias', 'roberta.encoder.layer.7.attention.self.query_global.weight', 'roberta.encoder.layer.7.attention.self.query_global.bias', 'roberta.encoder.layer.7.attention.self.key_global.weight', 'roberta.encoder.layer.7.attention.self.key_global.bias', 'roberta.encoder.layer.7.attention.self.value_global.weight', 'roberta.encoder.layer.7.attention.self.value_global.bias', 'roberta.encoder.layer.7.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 'roberta.encoder.layer.7.output.LayerNorm.bias', 
'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.8.attention.self.query_global.weight', 'roberta.encoder.layer.8.attention.self.query_global.bias', 'roberta.encoder.layer.8.attention.self.key_global.weight', 'roberta.encoder.layer.8.attention.self.key_global.bias', 'roberta.encoder.layer.8.attention.self.value_global.weight', 'roberta.encoder.layer.8.attention.self.value_global.bias', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.8.output.dense.bias', 'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.9.attention.self.value.bias', 'roberta.encoder.layer.9.attention.self.query_global.weight', 'roberta.encoder.layer.9.attention.self.query_global.bias', 'roberta.encoder.layer.9.attention.self.key_global.weight', 'roberta.encoder.layer.9.attention.self.key_global.bias', 'roberta.encoder.layer.9.attention.self.value_global.weight', 'roberta.encoder.layer.9.attention.self.value_global.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.encoder.layer.10.attention.self.query.bias', 'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.10.attention.self.query_global.weight', 'roberta.encoder.layer.10.attention.self.query_global.bias', 'roberta.encoder.layer.10.attention.self.key_global.weight', 'roberta.encoder.layer.10.attention.self.key_global.bias', 'roberta.encoder.layer.10.attention.self.value_global.weight', 'roberta.encoder.layer.10.attention.self.value_global.bias', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.10.output.dense.bias', 
'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.11.attention.self.query.bias', 'roberta.encoder.layer.11.attention.self.key.weight', 'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.11.attention.self.value.bias', 'roberta.encoder.layer.11.attention.self.query_global.weight', 'roberta.encoder.layer.11.attention.self.query_global.bias', 'roberta.encoder.layer.11.attention.self.key_global.weight', 'roberta.encoder.layer.11.attention.self.key_global.bias', 'roberta.encoder.layer.11.attention.self.value_global.weight', 'roberta.encoder.layer.11.attention.self.value_global.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight'] - This IS expected if you are initializing LongformerModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing LongformerModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of LongformerModel were not initialized from the model checkpoint at longformer-base-4096 and are newly initialized: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.self.query_global.weight', 'encoder.layer.0.attention.self.query_global.bias', 'encoder.layer.0.attention.self.key_global.weight', 'encoder.layer.0.attention.self.key_global.bias', 'encoder.layer.0.attention.self.value_global.weight', 'encoder.layer.0.attention.self.value_global.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.self.query_global.weight', 'encoder.layer.1.attention.self.query_global.bias', 'encoder.layer.1.attention.self.key_global.weight', 'encoder.layer.1.attention.self.key_global.bias', 'encoder.layer.1.attention.self.value_global.weight', 'encoder.layer.1.attention.self.value_global.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.self.query_global.weight', 'encoder.layer.2.attention.self.query_global.bias', 'encoder.layer.2.attention.self.key_global.weight', 'encoder.layer.2.attention.self.key_global.bias', 'encoder.layer.2.attention.self.value_global.weight', 'encoder.layer.2.attention.self.value_global.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 
'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.self.query_global.weight', 'encoder.layer.3.attention.self.query_global.bias', 'encoder.layer.3.attention.self.key_global.weight', 'encoder.layer.3.attention.self.key_global.bias', 'encoder.layer.3.attention.self.value_global.weight', 'encoder.layer.3.attention.self.value_global.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.self.query_global.weight', 'encoder.layer.4.attention.self.query_global.bias', 'encoder.layer.4.attention.self.key_global.weight', 'encoder.layer.4.attention.self.key_global.bias', 'encoder.layer.4.attention.self.value_global.weight', 'encoder.layer.4.attention.self.value_global.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.self.query_global.weight', 'encoder.layer.5.attention.self.query_global.bias', 'encoder.layer.5.attention.self.key_global.weight', 'encoder.layer.5.attention.self.key_global.bias', 'encoder.layer.5.attention.self.value_global.weight', 'encoder.layer.5.attention.self.value_global.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.self.query_global.weight', 'encoder.layer.6.attention.self.query_global.bias', 'encoder.layer.6.attention.self.key_global.weight', 'encoder.layer.6.attention.self.key_global.bias', 'encoder.layer.6.attention.self.value_global.weight', 'encoder.layer.6.attention.self.value_global.bias', 
'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.self.query_global.weight', 'encoder.layer.7.attention.self.query_global.bias', 'encoder.layer.7.attention.self.key_global.weight', 'encoder.layer.7.attention.self.key_global.bias', 'encoder.layer.7.attention.self.value_global.weight', 'encoder.layer.7.attention.self.value_global.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.self.query_global.weight', 'encoder.layer.8.attention.self.query_global.bias', 'encoder.layer.8.attention.self.key_global.weight', 'encoder.layer.8.attention.self.key_global.bias', 'encoder.layer.8.attention.self.value_global.weight', 'encoder.layer.8.attention.self.value_global.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 'encoder.layer.9.attention.self.query_global.weight', 'encoder.layer.9.attention.self.query_global.bias', 'encoder.layer.9.attention.self.key_global.weight', 'encoder.layer.9.attention.self.key_global.bias', 'encoder.layer.9.attention.self.value_global.weight', 'encoder.layer.9.attention.self.value_global.bias', 'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 
'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.self.query_global.weight', 'encoder.layer.10.attention.self.query_global.bias', 'encoder.layer.10.attention.self.key_global.weight', 'encoder.layer.10.attention.self.key_global.bias', 'encoder.layer.10.attention.self.value_global.weight', 'encoder.layer.10.attention.self.value_global.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.self.query_global.weight', 'encoder.layer.11.attention.self.query_global.bias', 'encoder.layer.11.attention.self.key_global.weight', 'encoder.layer.11.attention.self.key_global.bias', 'encoder.layer.11.attention.self.value_global.weight', 'encoder.layer.11.attention.self.value_global.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ` I would like to ask whether the presence of this warning will affect the results?How can remove this warning?thinks
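Editorial note, not part of the original issue: the warning above means essentially none of the checkpoint's `roberta.*`-prefixed weights were matched to the `LongformerModel` being built, so the model starts from newly initialized weights (hence the "You should probably TRAIN this model" message). Below is a minimal sketch of the loading pattern the Transformers Longformer documentation recommends, assuming the hub checkpoint `allenai/longformer-base-4096` rather than a locally converted one; the `gradient_checkpointing` flag from the original snippet is omitted here.

```python
# Sketch under the assumption that the converted hub checkpoint
# "allenai/longformer-base-4096" is used rather than a local conversion.
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("Hello world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```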
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9492/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9492/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9491
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9491/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9491/comments
https://api.github.com/repos/huggingface/transformers/issues/9491/events
https://github.com/huggingface/transformers/pull/9491
782,451,225
MDExOlB1bGxSZXF1ZXN0NTUyMDA1ODA4
9,491
[trainer] round numbers in trainer state
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
CONTRIBUTOR
null
This PR rounds very long fractions in the trainer state, e.g., ``` {'loss': 14.846837043762207, 'learning_rate': 6e-06, 'epoch': 0.3333333333333333} ``` to: * epoch: 2 decimals * loss: 4 decimals resulting in: ``` {'loss': 14.8468, 'learning_rate': 6e-06, 'epoch': 0.33} ``` If you want any other small tweaks for me to add, please let me know. Fixes: #9475 @sgugger
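Editorial illustration, not the Trainer's own code: the equivalent transformation applied to the logged values described in this PR.

```python
# Illustrative only -- shows the rounding the PR applies to logged values,
# not the actual Trainer implementation.
logs = {"loss": 14.846837043762207, "learning_rate": 6e-06, "epoch": 0.3333333333333333}

logs["loss"] = round(logs["loss"], 4)    # loss  -> 4 decimals
logs["epoch"] = round(logs["epoch"], 2)  # epoch -> 2 decimals

print(logs)  # {'loss': 14.8468, 'learning_rate': 6e-06, 'epoch': 0.33}
```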
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9491/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9491", "html_url": "https://github.com/huggingface/transformers/pull/9491", "diff_url": "https://github.com/huggingface/transformers/pull/9491.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9491.patch", "merged_at": 1610389069000 }
https://api.github.com/repos/huggingface/transformers/issues/9490
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9490/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9490/comments
https://api.github.com/repos/huggingface/transformers/issues/9490/events
https://github.com/huggingface/transformers/issues/9490
782,405,705
MDU6SXNzdWU3ODI0MDU3MDU=
9,490
Using Huggingface library with DeepSpeed
{ "login": "exelents", "id": 12846582, "node_id": "MDQ6VXNlcjEyODQ2NTgy", "avatar_url": "https://avatars.githubusercontent.com/u/12846582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/exelents", "html_url": "https://github.com/exelents", "followers_url": "https://api.github.com/users/exelents/followers", "following_url": "https://api.github.com/users/exelents/following{/other_user}", "gists_url": "https://api.github.com/users/exelents/gists{/gist_id}", "starred_url": "https://api.github.com/users/exelents/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exelents/subscriptions", "organizations_url": "https://api.github.com/users/exelents/orgs", "repos_url": "https://api.github.com/users/exelents/repos", "events_url": "https://api.github.com/users/exelents/events{/privacy}", "received_events_url": "https://api.github.com/users/exelents/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "There is an open PR by @patil-suraj for T5 FP16 https://github.com/huggingface/transformers/pull/9487\r\n\r\nAnd here is an open PR for deepspeed Integration by @stas00 https://github.com/huggingface/transformers/pull/9211", "Thank you!", "Moving my answers from https://github.com/huggingface/transformers/pull/9487 as they were irrelevant to the PR itself:\r\n\r\nContext: @exelents struggles with making rtx-3090 work with pytorch and getting:\r\n```\r\nnvcc fatal : Unsupported gpu architecture 'compute_86\r\n```\r\n\r\nI explained how I made it to work.\r\n\r\n------------------------------\r\n\r\nSo I just did:\r\n```\r\npip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U\r\n```\r\nThen inside deepspeed github clone:\r\n```\r\nrm -rf build\r\nTORCH_CUDA_ARCH_LIST=\"6.1;8.6\" DS_BUILD_OPS=1 pip install --no-cache -v --disable-pip-version-check -e .\r\n```\r\n\r\n8.6 corresponds to rtx-3090 arch.\r\n\r\nYou can remove 6.1, this is just my 2nd 1070 card's arch.\r\n\r\nAnd you can remove `-e` if you don't want the develop install.\r\n\r\nYou can install it normally from pypi too:\r\n```\r\npip install deepspeed\r\n```\r\nit'll use PTX/JIT - I tested it to work just fine - the explicit way from source repo just builds the most optimal specific version for my hardware and pre-compiles all features, which takes much longer to build.\r\n\r\nNow inside the deepspeed PR brach https://github.com/huggingface/transformers/pull/9211, I run against a small t5 model:\r\n```\r\nexport BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=2 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 100 --n_val 100 --n_test 100 --deepspeed ds_config.json --fp16 --save_steps 1\r\n```\r\n\r\nAll works. t5-base works too.\r\n\r\nFollow my steps and see if you can use your rtx-3090 card first. Then compare to what you are doing differently.", "I have been using pt-nightly w/ rtx-3090 for the last 2 months, so yes it works. pt-1.7.1 doesn't work.\r\n\r\nFor building modules that build pytorch extensions like deepspeed, and apex and fairscale I use cuda-11.1. Let me know what you're trying to build and I will tell you how to do it. \r\n\r\nI'm still on 11.1 for building extensions, while I know 11.2 is out since I'm not sure 11.1 is compatible with 11.2. 11.0 is compatible with 11.1 so one can use it to build against pt-nightly w/ 11.0 after hacking the build script.", "> I change current cuda in my system to 11.0 version (cuda-toolkit-11-0 package in Ubuntu)\r\n> Then I install latest pytorch nighty by a command which you propose.\r\n\r\nI think the difference is that you need cuda-11.1 and not cuda-11.0 system-wide. This is where our setups diverge I think.\r\n\r\nCareful though, there is 11.2 out there. 
I'm on ubuntu, I'm not at all sure it'd work w/ pt-nightly, that's why I'm not upgrading mine.\r\n\r\nfor rtx-3090 to work\r\n- tf requires cuda-11.1\r\n- pt works with cuda-11.0\r\n- pt extensions need cuda-11.1: apex, fairscale, deepspeed, The first 2 require hacking their build script to support 11.1 w/ pt built w/ 11.0. deepspeed works out of box.\r\n\r\nnote: If someone reads this at a later time this will probably become incorrect once pt-nightly builds w/ cuda-11.2 - then you should be able to install 11.2 system-wide and hopefully the extensions will just work.", "Moving my posts from PR #9487 due to they are irrelevant.\r\n\r\nI don't know what is my problem. I even tried solution made by @stas00 in #9211 but I still have the same problem.\r\nMaybe problem is I built Pytorch from source and forgot some option? I did it because pypi's version don't support Cuda 11.2 and supported cuda 11.0 don't support my gpu (rtx 3090)?\r\nMaybe I need install something to enable fp16 support?", "> I use pytorch-nightly w/ cuda-11.0 which works with rtx-3090:\r\n> \r\n> ```\r\n> pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U\r\n> ```\r\n> \r\n> pt-nightly w/ cuda-11.2 should be released really soon now. You can track it here [pytorch/pytorch#50232](https://github.com/pytorch/pytorch/issues/50232)\r\n\r\nIt doesn't work. Pytorch nighty gets an error:\r\n`nvcc fatal : Unsupported gpu architecture 'compute_86`\r\nThat's mean that Cuda 11.0 doesn't support RTX 3090", "> I have been using it for the last 2 months, so yes it works. pt-1.7.1 doesn't work.\r\n> \r\n> Chances are that you have more than one pytorch installed and you have the non-nightly version loaded, check your:\r\n> \r\n> ```\r\n> print(torch.__version__)\r\n> ```\r\nhere is installed version: 1.8.0.dev20210109+cu110\r\n\r\n> For building extensions like deepspeed, and apex and fairscale I use cuda-11.1\r\nOkay, I'll try cuda 11.1, maybe it'll help.\r\n\r\n", "> Excellent. so what exactly do you do when you get that error?\r\n\r\nI change current cuda in my system to 11.0 version (cuda-toolkit-11-0 package in Ubuntu)\r\nThen I install latest pytorch nighty by a command which you propose.\r\nLatest, I run deepspeed training script proposed in #9211 issue. But in parameters there I change model name to t5-large and remove language parameters from parameters fed to model\r\n\r\nThis code in utils.py:\r\n```\r\n if data_args.src_lang is not None:\r\n self.dataset_kwargs[\"src_lang\"] = data_args.src_lang\r\n if data_args.tgt_lang is not None:\r\n self.dataset_kwargs[\"tgt_lang\"] = data_args.tgt_lang\r\n```\r\nThis way I get a minimal training code that should run T5-large. It don't take my dataset like in two examples I have shown before, but it should work. What I see is only error \"platform not supported\" on pt nighty build, or NaNs in output tensors in version which I installed from source.\r\n\r\nHere is training script:\r\nhttps://gist.github.com/exelents/9dd3e6dec64dc0d640b85a7e0cfa53e9", "Thank you for making this extra effort, @exelents! 
We got out of the PR's way now.\r\n\r\nPlease let me know if you had success with: https://github.com/huggingface/transformers/issues/9490#issuecomment-757346485 taking into account this: https://github.com/huggingface/transformers/issues/9490#issuecomment-757346664\r\n", "Okay, @stas00 \r\nI have installed cuda-toolkit-11-1, installed Pytorch nighty version:\r\n`pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U`\r\nThen I compiled from source DeepSpeed with Cuda 11.1 and Nvidia 8.6 computing platform:\r\n`TORCH_CUDA_ARCH_LIST=\"8.6\" DS_BUILD_OPS=1 pip install --no-cache -v --disable-pip-version-check -e .`\r\nNow everything works well, on the loss scale 256.0 \r\n\r\nupd: Also, I have installed Huggingface Transformers library from source, on brach from PR #9211 merged with branch from PR #9487\r\n\r\nIt seems that maybe DeepSpeed doesn't support new Cuda 11.2, or because I compiled PyTorch and deepspeed on Cuda 11.2 with TORCH_CUDA_ARCH_LIST=8.0 instead of 8.6.", "Glad to hear it works now!\r\n\r\n> TORCH_CUDA_ARCH_LIST=8.0\r\n\r\nMost likely this!\r\n\r\nWhen you build from source you need to add +PTX `8.0+PTX` for this newer arch to work if you don't specify 8.6 explicitly. It basically tells the cuda compiler to allow newer archs to be supported as well and will compile the extension during first run-time via JIT and cache and re-use it.\r\n\r\nThis is how pytorch nightly is built (i.e. it includes `+PTX`).\r\n\r\nThe whole PTX has been recently documented in https://pytorch.org/docs/master/cpp_extension.html#torch.utils.cpp_extension.CUDAExtension" ]
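Editorial sketch, not from the thread: a quick way to check which CUDA build and compute capability the installed PyTorch actually sees, which is the mismatch being debugged in the comments above (compute capability (8, 6), i.e. `compute_86`, corresponds to the RTX 3090). It assumes a CUDA-enabled PyTorch install and a visible GPU.

```python
# Quick diagnostic; assumes PyTorch with CUDA support is installed and a GPU is visible.
import torch

print(torch.__version__)                    # e.g. a 1.8 nightly build
print(torch.version.cuda)                   # CUDA version PyTorch was built against
print(torch.cuda.get_device_name(0))        # e.g. "GeForce RTX 3090"
print(torch.cuda.get_device_capability(0))  # (8, 6) for an RTX 3090
print(torch.cuda.get_arch_list())           # compute archs compiled into this PyTorch build
```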
1,610
1,610
1,610
NONE
null
I'm not completely sure whether this is a problem with this library, but maybe you could help. Trying to run T5-large from the Hugging Face library together with the DeepSpeed library, I got a strange result: when I switch to fp16 mode, the training loss becomes NaN, as do some of the tensors in the model's feature outputs. I'm not sure whether it could be a Transformers library fault. The original example that I used utilizes pytorch_pretrained_bert, and it works well. Training with FP32 does not result in any NaN troubles. I have some code, based on the DeepSpeedExamples code: https://github.com/exelents/try_t5 If somebody would like to help and try to run it, here is the compiled binary dataset: https://drive.google.com/file/d/1oxCxYCuCWebmaUQ_s9il7EDBkisL7x-_/view?usp=sharing https://drive.google.com/file/d/1WCzxAnp2bEllbQ0_2d_6hoq5tQjxBFXh/view?usp=sharing
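Editorial sketch, not part of the original report: a minimal fp16 NaN check on a small T5 checkpoint. The checkpoint name ("t5-small"), the example sentences, and the GPU requirement are assumptions standing in for the issue's T5-large + DeepSpeed setup.

```python
# Minimal fp16 NaN check; assumes a CUDA GPU and the "t5-small" checkpoint,
# which are stand-ins for the T5-large + DeepSpeed setup in the issue.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").half().cuda().eval()

batch = tokenizer("translate English to German: Hello, how are you?", return_tensors="pt").to("cuda")
labels = tokenizer("Hallo, wie geht es dir?", return_tensors="pt").input_ids.to("cuda")

with torch.no_grad():
    out = model(**batch, labels=labels)

print("loss:", out.loss.item(), "is NaN:", torch.isnan(out.loss).item())
print("any NaN in logits:", torch.isnan(out.logits).any().item())
```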
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9490/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9489
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9489/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9489/comments
https://api.github.com/repos/huggingface/transformers/issues/9489/events
https://github.com/huggingface/transformers/pull/9489
782,342,741
MDExOlB1bGxSZXF1ZXN0NTUxOTE1NzUy
9,489
fix(wandb): fix config
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Failures in the tests are unrelated so merging." ]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? Fix an issue introduced with PR #9441. There was just 2 lines to switch related to wandb config detection. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9489/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9489", "html_url": "https://github.com/huggingface/transformers/pull/9489", "diff_url": "https://github.com/huggingface/transformers/pull/9489.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9489.patch", "merged_at": 1610134323000 }
https://api.github.com/repos/huggingface/transformers/issues/9488
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9488/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9488/comments
https://api.github.com/repos/huggingface/transformers/issues/9488/events
https://github.com/huggingface/transformers/pull/9488
782,341,367
MDExOlB1bGxSZXF1ZXN0NTUxOTE0NjE0
9,488
Make doc styler detect lists on rst and better support for Windows
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
COLLABORATOR
null
# What does this PR do? The new lines for the lists in rst files were not actually added because I made a mistake; this PR fixes that. @patrickvonplaten it adds some new lines in the benchmarking files which I think are okay, but let me know if I should write some special code to get the scripts to ignore them. Also, I changed the line that added the new lines before doc special words since it did not seem to be working properly on Windows. Let's see if this version is better! (Failures are because master is red at the time of this PR.) Fixes #9438
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9488/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9488/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9488", "html_url": "https://github.com/huggingface/transformers/pull/9488", "diff_url": "https://github.com/huggingface/transformers/pull/9488.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9488.patch", "merged_at": 1610373222000 }
https://api.github.com/repos/huggingface/transformers/issues/9487
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9487/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9487/comments
https://api.github.com/repos/huggingface/transformers/issues/9487/events
https://github.com/huggingface/transformers/pull/9487
782,296,956
MDExOlB1bGxSZXF1ZXN0NTUxODc3ODcz
9,487
[T5] enable T5 fp16
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is great!", "Dear @patil-suraj \r\nYour PR works well for t5 model, thank you for your work.\r\nBut now I tried new t5 model version released recently by Google: google/t5-v1_1-xl\r\nThe same code after loading google/t5-v1_1-xl instead of t5-3b is going to return a lot \"overflow\" errors.\r\n\r\nCan you tell me, should your code fix fp16 on google/t5-v1_1-xl model? \r\nHere is training code:\r\nhttps://github.com/exelents/try_t5_qa\r\nRun ./run-qa-3b.sh\r\n\r\nUpd: I run my code on Transformers's branch from your current PR #9487 merged with PR #9211 needed for deepspeed integration.\r\nCan you confirm a problem, or it's just mine?", "> Dear @patil-suraj\r\n> Your PR works well for t5 model, thank you for your work.\r\n> But now I tried new t5 model version released recently by Google: google/t5-v1_1-xl\r\n> The same code after loading google/t5-v1_1-xl instead of t5-3b is going to return a lot \"overflow\" errors.\r\n> \r\n> Can you tell me, should your code fix fp16 on google/t5-v1_1-xl model?\r\n> Here is training code:\r\n> https://github.com/exelents/try_t5_qa\r\n> Run ./run-qa-3b.sh\r\n> \r\n> Upd: I run my code on Transformers's branch from your current PR #9487 merged with PR #9211 needed for deepspeed integration.\r\n> Can you confirm a problem, or it's just mine?\r\n\r\nHey @exelents, can you include a code snippet to reproduce your error as well as the full stack trace of your error?", "@patrickvonplaten , @exelents \r\n\r\nas stated in #9432 \r\n\r\nThis fix works for following models and versions, with apex `01` and `native amp`\r\n- T5v1: t5-small, t5-base, t5-large\r\n- T5v1_1: google/t5-v1_1-small, google/t5-v1_1-base\r\n- MT5: google/mt5-small, google/mt5-base\r\n\r\nJust did a small experiment with `t5-v1_1-large` and it still gives `nan` loss after 200 steps, so might not work for `xl`, \r\n\r\nalso, @exelents by overflow error do you mean the gradient overflow warning thrown by `apex` ?", "> @patrickvonplaten , @exelents\r\n> \r\n> as stated in #9432\r\n> \r\n> This fix works for following models and versions, with apex `01` and `native amp`\r\n> \r\n> * T5v1: t5-small, t5-base, t5-large\r\n> * T5v1_1: google/t5-v1_1-small, google/t5-v1_1-base\r\n> * MT5: google/mt5-small, google/mt5-base\r\n> \r\n> Just did a small experiment with `t5-v1_1-large` and it still gives `nan` loss after 200 steps, so might not work for `xl`,\r\n> \r\n> also, @exelents by overflow error do you mean the gradient overflow warning thrown by `apex` ?\r\n\r\nAh ok, we still see `nan's` with `t5-v1_1-large` then :-/ Do you think this could be fixed by adding one more clamp statement? @patil-suraj ", "> Hey @exelents, can you include a code snippet to reproduce your error as well as the full stack trace of your error?\r\nMy code is here:\r\nhttps://github.com/exelents/try_t5_qa\r\nIt requires deepspeed to run, as well as code from #9211 PR (deepspeed integration) be merged. Use run-qa-3b.sh to test.\r\n\r\nHere is error stack:\r\nhttps://gist.github.com/exelents/10f1d03e61059ddf2dfba7068114c93a\r\nLook at the end - we have a message after every step:\r\n`[2021-01-11 16:58:18,163] [INFO] [stage2.py:1361:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 256.0, reducing to 128.0`\r\nWait a second, I'll try to check loss value tensor.", "> Do you think this could be fixed by adding one more clamp statement?\r\n\r\nI'm again trying to locate where exactly in the model this happen. 
In case it's the same as above (first `inf` then `nan` ) then we could fix it by adding one more clamp", "I have checked a loss value, and it seems in is not NaN. It got values like \"48.7500\" or \"40.9688\" but there are vaild values. Despite that I see messages like \"OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1024.0, reducing to 512.0\", that it seems means that something bad happened with model's loss.", "> Attempted loss scale: 1024.0, reducing to 512.0\", that it seems means that something bad happened with model's loss.\r\n\r\nThose warnings don't mean anything went wrong, it's logical with dynamic loss scaling that some loss scale values are too big at the beginning of training." ]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? This PR enables fp16 for T5 models by clamping hidden states to the max value of the current data type. As detailed in #9295, T5 produces large (`inf`) activations in 3 places: 1. the output of `T5LayerFF`, 2. the output of `T5LayerSelfAttention`, 3. the output of `T5LayerCrossAttention`. To avoid these `inf` activations, this PR clamps the `hidden_states` after each of the above 3 outputs.
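For readers of this dataset entry, a minimal sketch of the clamping pattern the PR describes is shown below (the exact threshold and placement inside `modeling_t5.py` may differ slightly from this illustration):

```python
import torch

def clamp_fp16(hidden_states: torch.Tensor) -> torch.Tensor:
    # Only relevant in half precision, and only when an overflow actually occurred.
    if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```

Applying this after the three outputs listed above keeps activations inside the fp16 range and avoids the `inf` -> `nan` cascade.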
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9487/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/9487/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9487", "html_url": "https://github.com/huggingface/transformers/pull/9487", "diff_url": "https://github.com/huggingface/transformers/pull/9487.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9487.patch", "merged_at": 1610451753000 }
https://api.github.com/repos/huggingface/transformers/issues/9486
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9486/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9486/comments
https://api.github.com/repos/huggingface/transformers/issues/9486/events
https://github.com/huggingface/transformers/pull/9486
782,291,058
MDExOlB1bGxSZXF1ZXN0NTUxODcyOTky
9,486
Update run_glue for do_predict with local test data (#9442)
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "organizations_url": "https://api.github.com/users/forest1988/orgs", "repos_url": "https://api.github.com/users/forest1988/repos", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "received_events_url": "https://api.github.com/users/forest1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Error messages of the CircleCI are:\r\n\r\n```\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n=========================== short test summary info ============================\r\nFAILED tests/test_pipelines_conversational.py::SimpleConversationPipelineTests::test_history_cache\r\nFAILED tests/test_pipelines_conversational.py::SimpleConversationPipelineTests::test_integration_torch_conversation\r\n==== 2 failed, 4207 passed, 1744 skipped, 734 warnings in 190.84s (0:03:10) ====\r\n```\r\n\r\n```\r\nFAILED tests/test_pipelines_conversational.py::SimpleConversationPipelineTests::test_history_cache\r\n==== 1 failed, 4178 passed, 1774 skipped, 735 warnings in 260.31s (0:04:20) ====\r\n```\r\n\r\nI'm sorry but I'd like to ask you if `run_glue.py` is related to the conversation pipeline.\r\n", "@sgugger @LysandreJik \r\nThank you for reviewing and merging!" ]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? Currently, run_glue.py cannot use the test set (`do_predict`) unless we give it a GLUE task name. This PR will allow us to use the local test dataset. As commented in #9442, I tried to achieve the functionality with only simple changes. - It still works with only the local train and valid files (in other words, this PR does not break the current operation.). - If we add `--do_predict` with out adding specific params, we will get an error statement saying that we need either the GLUE task name or the path of the local test file. Fixes #9442 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Thank you for your kind comments on the issue. I have tried to keep it simple and hope there is no problem as an example script.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9486/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9486", "html_url": "https://github.com/huggingface/transformers/pull/9486", "diff_url": "https://github.com/huggingface/transformers/pull/9486.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9486.patch", "merged_at": 1610542116000 }
https://api.github.com/repos/huggingface/transformers/issues/9485
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9485/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9485/comments
https://api.github.com/repos/huggingface/transformers/issues/9485/events
https://github.com/huggingface/transformers/issues/9485
782,269,141
MDU6SXNzdWU3ODIyNjkxNDE=
9,485
ProphetNetNgramAttention: Number of attention heads
{ "login": "guillaume-be", "id": 27071604, "node_id": "MDQ6VXNlcjI3MDcxNjA0", "avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guillaume-be", "html_url": "https://github.com/guillaume-be", "followers_url": "https://api.github.com/users/guillaume-be/followers", "following_url": "https://api.github.com/users/guillaume-be/following{/other_user}", "gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}", "starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions", "organizations_url": "https://api.github.com/users/guillaume-be/orgs", "repos_url": "https://api.github.com/users/guillaume-be/repos", "events_url": "https://api.github.com/users/guillaume-be/events{/privacy}", "received_events_url": "https://api.github.com/users/guillaume-be/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @guillaume-be,\r\n\r\nyou're 100% correct about both the naming: We should remove one `ProphetNet` and also about the config. Thanks a lot for reporting this. I'll open a PR " ]
1,610
1,610
1,610
CONTRIBUTOR
null
## Information Model I am using (Bert, XLNet ...): ProphetNet The ProphetNet Ngram attention layer seems to refer to a wrong number of heads. The `ProphetNetNgramProphetNetSelfAttention` (this seems to be a typo, by the way; maybe `ProphetNetNgramSelfAttention` would be more appropriate?) is part of the decoder, and therefore I would expect it to contain a number of attention heads equal to the configuration parameter `num_decoder_attention_heads`. However, when instantiated at https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/prophetnet/modeling_prophetnet.py#L759, it uses the property `num_attention_heads`, which equals the number of **encoder** attention heads. I assume that the correct value should be `config.num_decoder_attention_heads`. (Luckily?) no issue showed up in most models because pretrained models have the same number of encoder and decoder heads. Looking at the reference implementation, it does seem that the **decoder** number of attention heads is used for the Ngram attention (see https://github.com/microsoft/ProphetNet/blob/1d36bc5c4f334b0ed9b90fdf3a64785c174f5c45/GLGE_baselines/script/prophetnet/ngram_s2s_model.py#L586). ### Who can help @patrickvonplaten? Again, I am not sure since I do not see an owner for ProphetNet. I would be happy to submit a small PR that references `config.num_decoder_attention_heads` rather than this property if you agree with this change. Thanks!
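As a hedged illustration of the configuration attributes the issue refers to (this only demonstrates the config values; it is not the proposed patch itself):

```python
from transformers import ProphetNetConfig

# With asymmetric head counts, the distinction pointed out in the issue becomes visible.
config = ProphetNetConfig(num_encoder_attention_heads=16, num_decoder_attention_heads=8)

print(config.num_decoder_attention_heads)  # 8  -> what the decoder n-gram attention should use
print(config.num_attention_heads)          # per the issue's reading, this mirrors the encoder value (16)
```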
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9485/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9485/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9484
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9484/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9484/comments
https://api.github.com/repos/huggingface/transformers/issues/9484/events
https://github.com/huggingface/transformers/pull/9484
782,230,397
MDExOlB1bGxSZXF1ZXN0NTUxODIzNDAw
9,484
[Flax] Adapt Flax models to new structure
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2934977194, "node_id": "MDU6TGFiZWwyOTM0OTc3MTk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flax", "name": "Flax", "color": "4862AD", "default": false, "description": "" } ]
closed
false
null
[]
[ "Will wait until https://github.com/huggingface/transformers/pull/10775 is merged, then rebase and then merge.", "@patrickvonplaten \r\n\r\nI like the new structure but it seems this PR broke the flax example: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_flax.py\r\n\r\n- This line (https://github.com/huggingface/transformers/blob/896d7be97401a85dc0ffc5460afd707e8e092781/examples/language-modeling/run_mlm_flax.py#L577) will raise the error\r\n```\r\nTypeError: __init__() got an unexpected keyword argument 'dropout_rate'\r\n```\r\n\r\n- In addition, this line https://github.com/huggingface/transformers/blob/896d7be97401a85dc0ffc5460afd707e8e092781/src/transformers/models/bert/modeling_flax_bert.py#L254 uses an undefined variable `self.dropout_rate`.\r\n\r\nI think we should make more test cases and make sure the examples are runnable. \r\n", "I am very interested in the jax/flax integration. Could you also take a look at my PR? https://github.com/huggingface/transformers/pull/10796\r\nIf you are collaborative and welcome contributions from me, I can contribute more and improve the flax examples." ]
1,610
1,619
1,616
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> As discussed in https://github.com/huggingface/transformers/pull/9172, Flax model should get a design that is most similar to PyTorch and thus should use `def setup(...)` instead of `nn.compact(...)`. This PR refactors the model architecture of Bert & Roberta accordingly. The next step is now to add a general conversion method flax<>pytorch which might require some more follow-up changes to the naming of the weights. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9484/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9484/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9484", "html_url": "https://github.com/huggingface/transformers/pull/9484", "diff_url": "https://github.com/huggingface/transformers/pull/9484.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9484.patch", "merged_at": 1616049858000 }
https://api.github.com/repos/huggingface/transformers/issues/9483
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9483/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9483/comments
https://api.github.com/repos/huggingface/transformers/issues/9483/events
https://github.com/huggingface/transformers/pull/9483
782,137,360
MDExOlB1bGxSZXF1ZXN0NTUxNzQ4NTM0
9,483
Fixing tests. It seems master changed something in the warnings.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't understand this: `\"60 started to trigger a new warning, saying that the input_ids length was longer than model max_length.\"`", "Would be nice to find what triggered this to be sure we didn't introduce a bug no?", "@patrickvonplaten I think we're good.\r\n\r\nIt's this commit 79bbcc5260c3acde3e7156966ba836afcbfd8808 that triggered the extra warning." ]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? Trying to keep warning tests for now. Should be discarded if it becomes too hard to maintain. 60 started to trigger a new warning, saying that the input_ids length was longer than model max_length. I'm not really sure which commit triggered this, but it did not occur in the original PR <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9483/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9483", "html_url": "https://github.com/huggingface/transformers/pull/9483", "diff_url": "https://github.com/huggingface/transformers/pull/9483.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9483.patch", "merged_at": 1610287701000 }
https://api.github.com/repos/huggingface/transformers/issues/9482
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9482/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9482/comments
https://api.github.com/repos/huggingface/transformers/issues/9482/events
https://github.com/huggingface/transformers/pull/9482
782,114,003
MDExOlB1bGxSZXF1ZXN0NTUxNzI5NDA0
9,482
Reformat the TF serving outputs
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Merged PR to unblock TF Bart - Split PR. However merge to master made tf templates test fail, see: https://github.com/huggingface/transformers/runs/1676878391 . @jplu, I think they need some updating." ]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? This PR properly reformats the `serving_output` methods.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9482/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9482", "html_url": "https://github.com/huggingface/transformers/pull/9482", "diff_url": "https://github.com/huggingface/transformers/pull/9482.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9482.patch", "merged_at": 1610287816000 }
https://api.github.com/repos/huggingface/transformers/issues/9481
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9481/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9481/comments
https://api.github.com/repos/huggingface/transformers/issues/9481/events
https://github.com/huggingface/transformers/issues/9481
782,059,299
MDU6SXNzdWU3ODIwNTkyOTk=
9,481
dataset not being sent to device when using Trainer (distributed)
{ "login": "KennethEnevoldsen", "id": 23721977, "node_id": "MDQ6VXNlcjIzNzIxOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KennethEnevoldsen", "html_url": "https://github.com/KennethEnevoldsen", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}", "gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions", "organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs", "repos_url": "https://api.github.com/users/KennethEnevoldsen/repos", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The `model_parallel` argument has nothing to do with training in a parallel fashion (and is going to be deleted very soon since you're not the first user its name confuses). To use parallel training with:\r\n- PyTorch DataParallel, there is nothing to do, the Trainer does it automatically\r\n- PyTorch DistributedDataParallel, you should launch your script with the `python -m torch.distributed.launch` command (see the examples).", "Glad to hear about the name.\r\n\r\nI was aware that DataParallel was used by default.\r\n\r\nBy examples you must refer to: https://huggingface.co/transformers/examples.html\r\nbut it doesn't seem to provide any information on `torch.distributed.launch` am I missing something?\r\n\r\nwill try to run with flag, but would love if I could find some documentation on this\r\n\r\nThanks for the quick response\r\n", "you can find the commands/docs to launch distributed training in the examples [readme](https://github.com/huggingface/transformers/tree/master/examples#distributed-training-and-mixed-precision)", "Thanks, I will try this out as soon as our GPU's are available again!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,618
1,618
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Linux-4.15.0-99-generic-x86_64-with-glibc2.27 - Python version: 3.8.0 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): 2.4.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: distrubuted ### Who can help Trainer: @sgugger Text Generation /t5: @patrickvonplaten examples/seq2seq: @patil-suraj ## Information The intention is to train a t5 model (preferably as last as possible) in a distributed setting using the HF Trainer. However, when setting the model_parallel to True the training breaks. related issues might be: https://github.com/huggingface/transformers/issues/9229 https://github.com/huggingface/transformers/issues/6821 However, do note that the script works perfectly fine training on multiple GPU in a non distributed fashion (setting model_parallel to False). ## To reproduce I have created a minimal script for reproducing the behavior: ``` import transformers from transformers import ( MT5ForConditionalGeneration, MT5Model, Trainer, TrainingArguments, T5Tokenizer ) ``` <details> <summary> Click to see (minimal) dataset creation </summary> ``` import datasets # making minimal test for example sake def make_test_dataset(tokenizer="google/mt5-small"): if isinstance(tokenizer, str): tokenizer = T5Tokenizer.from_pretrained(tokenizer) ds = datasets.load_dataset("dane") def __tokenizer_input(batch): return tokenizer(batch['text'], padding="max_length", max_length=256, # actual max is 235 truncation=True) def __tokenizer_output(batch): tok = tokenizer(batch['text'], padding="max_length", max_length=256, truncation=True) tok["labels"] = tok.pop("input_ids") return tok # filter out empty strings (bug reported and fixed) ds = ds.filter(lambda batch: bool(batch["text"])) # tokenize both datasets (eos: </s> is added by tokenizer) ds = ds.map(__tokenizer_input, batched=True, batch_size=len(ds)) ds = ds.map(__tokenizer_output, batched=True, batch_size=len(ds)) return ds dataset = make_test_dataset() dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels']) ``` </details> ``` model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small") #using a small model for example training_args = TrainingArguments( output_dir='./results', num_train_epochs=1, logging_dir='./logs', evaluation_strategy="epoch", model_parallel=True # work fine when set to False ) trainer = Trainer( model=model, args=training_args, train_dataset=dataset["train"], eval_dataset=dataset["test"] ) trainer.remove_callback(transformers.integrations.WandbCallback) # removing wandb for conv. 
trainer.train() ``` Results: ``` RuntimeError: Input, output and indices must be on the current device ``` <details> <summary> Click to see full traceback </summary> ``` RuntimeError Traceback (most recent call last) ~/github/EDP-Efficient-Danish-Preprocessing/tmp.py in 64 65 trainer.remove_callback(transformers.integrations.WandbCallback) ---> 66 trainer.train() ~/.Envs/EDP/lib/python3.8/site-packages/transformers/trainer.py in train(self, model_path, trial) 797 tr_loss += self.training_step(model, inputs) 798 else: --> 799 tr_loss += self.training_step(model, inputs) 800 self._total_flos += self.floating_point_ops(inputs) 801 ~/.Envs/EDP/lib/python3.8/site-packages/transformers/trainer.py in training_step(self, model, inputs) 1137 loss = self.compute_loss(model, inputs) 1138 else: -> 1139 loss = self.compute_loss(model, inputs) 1140 1141 if self.args.n_gpu > 1: ~/.Envs/EDP/lib/python3.8/site-packages/transformers/trainer.py in compute_loss(self, model, inputs) 1161 Subclass and override for custom behavior. 1162 """ -> 1163 outputs = model(**inputs) 1164 # Save past state if it exists 1165 # TODO: this needs to be fixed and made cleaner later. ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/.Envs/EDP/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, head_mask, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1422 if encoder_outputs is None: 1423 # Convert encoder inputs in embeddings if needed -> 1424 encoder_outputs = self.encoder( 1425 input_ids=input_ids, 1426 attention_mask=attention_mask, ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/.Envs/EDP/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 858 if inputs_embeds is None: 859 assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings" --> 860 inputs_embeds = self.embed_tokens(input_ids) 861 862 batch_size, seq_length = input_shape ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input) 122 123 def forward(self, input: Tensor) -> Tensor: --> 124 return F.embedding( 125 input, self.weight, self.padding_idx, self.max_norm, 126 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1850 # remove once script supports set_grad_enabled 1851 
_no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1853 1854 RuntimeError: Input, output and indices must be on the current device ``` </details> ## Expected behavior A training model PS: I personally found that the model_parallel name was slightly confusing. I assume a more fitting name would be model_distributed (but this is a minor thing) Thanks for great work
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9481/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9480
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9480/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9480/comments
https://api.github.com/repos/huggingface/transformers/issues/9480/events
https://github.com/huggingface/transformers/issues/9480
782,036,850
MDU6SXNzdWU3ODIwMzY4NTA=
9,480
request for run_text_classification.py
{ "login": "max-yue", "id": 13486398, "node_id": "MDQ6VXNlcjEzNDg2Mzk4", "avatar_url": "https://avatars.githubusercontent.com/u/13486398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/max-yue", "html_url": "https://github.com/max-yue", "followers_url": "https://api.github.com/users/max-yue/followers", "following_url": "https://api.github.com/users/max-yue/following{/other_user}", "gists_url": "https://api.github.com/users/max-yue/gists{/gist_id}", "starred_url": "https://api.github.com/users/max-yue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/max-yue/subscriptions", "organizations_url": "https://api.github.com/users/max-yue/orgs", "repos_url": "https://api.github.com/users/max-yue/repos", "events_url": "https://api.github.com/users/max-yue/events{/privacy}", "received_events_url": "https://api.github.com/users/max-yue/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! The `run_glue.py` script does what you're looking for.", "> Hi! The `run_glue.py` script does what you're looking for.\r\n\r\nLet's say I have train.csv, dev.csv and test.csv, how can I do it without modify the code? Thank you for your patience.", "Please use the [forums](https://discuss.huggingface.co/) for questions around the script. Running it with the -h option will give you the list of arguments it accepts, in particular `--train_file` and `--validation_file`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,619
1,619
CONTRIBUTOR
null
# 🚀 Feature request There is a run_tf_text_classification.py file under text_classification examples, but no run_text_classification.py.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9480/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9479
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9479/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9479/comments
https://api.github.com/repos/huggingface/transformers/issues/9479/events
https://github.com/huggingface/transformers/pull/9479
782,031,536
MDExOlB1bGxSZXF1ZXN0NTUxNjYwOTQz
9,479
Makes HfArgumentParser compatible with Python 3.9
{ "login": "Tpt", "id": 458123, "node_id": "MDQ6VXNlcjQ1ODEyMw==", "avatar_url": "https://avatars.githubusercontent.com/u/458123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tpt", "html_url": "https://github.com/Tpt", "followers_url": "https://api.github.com/users/Tpt/followers", "following_url": "https://api.github.com/users/Tpt/following{/other_user}", "gists_url": "https://api.github.com/users/Tpt/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tpt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tpt/subscriptions", "organizations_url": "https://api.github.com/users/Tpt/orgs", "repos_url": "https://api.github.com/users/Tpt/repos", "events_url": "https://api.github.com/users/Tpt/events{/privacy}", "received_events_url": "https://api.github.com/users/Tpt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "LGTM" ]
1,610
1,610
1,610
CONTRIBUTOR
null
Python 3.9 changed the format of the string serialization of `typing.Optional`. For example, `str(typing.Optional[str])` is `typing.Union[str, NoneType]` in Python 3.8 and `typing.Optional[str]` in Python 3.9.
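A small sketch of the behavior difference and of a string-free way to detect optional fields (the actual `HfArgumentParser` fix may be implemented differently):

```python
import typing

# Repr differs across versions, which is what broke the string-based check:
#   Python 3.8: "typing.Union[str, NoneType]"
#   Python 3.9: "typing.Optional[str]"
print(str(typing.Optional[str]))

# Version-agnostic detection via introspection instead of string matching (illustration only):
def is_optional(tp) -> bool:
    return typing.get_origin(tp) is typing.Union and type(None) in typing.get_args(tp)

print(is_optional(typing.Optional[str]))  # True on both 3.8 and 3.9
```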
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9479/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9479/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9479", "html_url": "https://github.com/huggingface/transformers/pull/9479", "diff_url": "https://github.com/huggingface/transformers/pull/9479.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9479.patch", "merged_at": 1610111444000 }
https://api.github.com/repos/huggingface/transformers/issues/9478
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9478/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9478/comments
https://api.github.com/repos/huggingface/transformers/issues/9478/events
https://github.com/huggingface/transformers/pull/9478
782,003,652
MDExOlB1bGxSZXF1ZXN0NTUxNjM3OTI3
9,478
Fix TF s2s models
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten Should I remove the following hack in BART?\r\n```python\r\nif inputs[\"decoder_input_ids\"] is None and inputs[\"input_ids\"] is not None:\r\n inputs[\"decoder_input_ids\"] = shift_tokens_right(\r\n inputs[\"input_ids\"], self.config.pad_token_id, self.config.eos_token_id\r\n )\r\n```", "> In general I really don't like the tf.cond(condition, do_fn_one, do_fn_two) design. I think I understand that it is sometimes necessary, but I really like to keep the usage of this function to a minimum in general. The functional approach is very different to our general library design and make the code much much harder to read. It always creates an abstraction by having to wrap parts of the code into a function with no args, like def attn_mask_from_inp_ids() which is not easy to follow and to me always looks like a hack. In Bart we manage to do this part of the code without the usage of tf.cond and Bart has the same exact logic as LED has there -> so we can make it easier I think.\r\n\r\nI understand your point and agree with you and I share you opinion on this, and unfortunately if you come to control flow (conditions and loops) there are some strict rules that one cannot overcome. `tf.cond` is somehow mandatory for autograph.\r\n\r\nA solution I think that should work would be to force the `layer.call` function to be run in graph mode with `@tf.function` which takes care of making itself the translation of all these conditions and loops. This work in some cases, let's see if it works... Does it sounds a proper solution for you?", "> @patrickvonplaten Should I remove the following hack in BART?\r\n> \r\n> ```python\r\n> if inputs[\"decoder_input_ids\"] is None and inputs[\"input_ids\"] is not None:\r\n> inputs[\"decoder_input_ids\"] = shift_tokens_right(\r\n> inputs[\"input_ids\"], self.config.pad_token_id, self.config.eos_token_id\r\n> )\r\n> ```\r\n\r\nPlease don't - it's needed for some use-cases in Bart and for backward comp", "Actually, one thing I'd like to know more in general about our models in TF is the following: \r\n\r\n\"Can we use normal if-else statements in the forward pass\"? \r\n\r\nI always thought that the answer is:\r\n\r\n\"Yes we can as long as the output type and shape of each case is the same\" \r\n\r\nSo for me statements like:\r\n\r\n```python \r\nif shape_list(input_ids) > n:\r\n attention_mask = torch.zeros(shape_list(input_ids))\r\nelse:\r\n attention_mask = torch.ones(shape_list(input_ids))\r\n```\r\n(this code snippet doesn't exist -> it's just an example)\r\n\r\nare totally fine. Is the assumption correct @jplu ?\r\n\r\nOr can we in general **never** use normal if-else statements in TF's forward pass and have to rely on `tf.cond(....)`? This would really surprise me as we're having tons of if statements everywhere in the TF code...\r\n\r\n", "The general answer is yes, but it has some conditions. If you run this condition in eager mode, it will works by default (you can basically do almost anything in eager mode)\r\n\r\nIf you run this condition in graph mode you have two solution to make it works:\r\n1. Either use `tf.cond`\r\n2. Or to wrap your condition into a function decorated with `tf.function`. This will have to effect to apply the Autograph library over the content of your decorated function. Autograph will automatically converts `if-then` clauses, loops, `break`, `return`, `continue`, and more. 
You can have more information here https://www.tensorflow.org/guide/function#autograph_transformations", "Now that we all agree on a solution, I will apply it for all the models 👍 ", "Ok, LGTM!! Feel free to merge whenever you feel it^^" ]
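As a small illustration of the AutoGraph point made in the comments above (not code from the PR), a plain Python if/else on a tensor-dependent condition works inside a `tf.function`-decorated call because AutoGraph rewrites it into graph-compatible control flow:

```python
import tensorflow as tf

@tf.function  # AutoGraph converts the data-dependent if/else into graph control flow
def make_mask(input_ids, n):
    if tf.shape(input_ids)[-1] > n:   # condition depends on a tensor value
        mask = tf.zeros_like(input_ids)
    else:
        mask = tf.ones_like(input_ids)
    return mask

print(make_mask(tf.constant([[1, 2, 3]]), 2))  # -> zeros, even when traced in graph mode
```

Both branches produce a tensor of the same shape and dtype, which is the condition under which such if/else statements are safe in graph mode.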
1,610
1,611
1,611
CONTRIBUTOR
null
# What does this PR do? This PR aims to fix the Seq2Seq models in order to make them able to be served through TF Serving. The problem is stated by @patrickvonplaten in #9313. The reason it failed is that we use a model as a layer in the `TFXXXForConditionalGeneration` models. When building a graph, the tracing mechanism of TensorFlow calls all the layers one by one. In order to know which inputs each layer needs, the tracing mechanism checks whether a layer has a custom input signature; if not, it takes as default a signature where only the first argument is mandatory. Here lies the problem: the Seq2Seq models need two mandatory arguments (`input_ids` and `decoder_input_ids`, or `inputs_embeds` and `decoder_inputs_embeds`), and so the tracing fails. The fix to this problem is to manually set the expected input signature of the base model when instantiating it in `__init__`. To be harmonized with the required serving, the same signature is used.
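A hedged sketch of the kind of explicit input signature involved (the PR wires an equivalent signature into the base model at `__init__` time; this standalone `tf.function` form is only meant to illustrate the idea):

```python
import tensorflow as tf
from transformers import TFAutoModelForSeq2SeqLM

model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")  # any TF seq2seq model, used here as an example

serving_signature = [
    {
        "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"),
        "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
        "decoder_input_ids": tf.TensorSpec((None, None), tf.int32, name="decoder_input_ids"),
        "decoder_attention_mask": tf.TensorSpec((None, None), tf.int32, name="decoder_attention_mask"),
    }
]

@tf.function(input_signature=serving_signature)
def serving(inputs):
    # With both input_ids and decoder_input_ids declared, tracing no longer assumes
    # that only the first argument is mandatory.
    return model(inputs)
```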
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9478/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9478", "html_url": "https://github.com/huggingface/transformers/pull/9478", "diff_url": "https://github.com/huggingface/transformers/pull/9478.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9478.patch", "merged_at": 1611245010000 }
https://api.github.com/repos/huggingface/transformers/issues/9477
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9477/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9477/comments
https://api.github.com/repos/huggingface/transformers/issues/9477/events
https://github.com/huggingface/transformers/pull/9477
781,965,851
MDExOlB1bGxSZXF1ZXN0NTUxNjA2NTE3
9,477
rename "gpu" --> "device"
{ "login": "yuchenlin", "id": 10104354, "node_id": "MDQ6VXNlcjEwMTA0MzU0", "avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuchenlin", "html_url": "https://github.com/yuchenlin", "followers_url": "https://api.github.com/users/yuchenlin/followers", "following_url": "https://api.github.com/users/yuchenlin/following{/other_user}", "gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions", "organizations_url": "https://api.github.com/users/yuchenlin/orgs", "repos_url": "https://api.github.com/users/yuchenlin/repos", "events_url": "https://api.github.com/users/yuchenlin/events{/privacy}", "received_events_url": "https://api.github.com/users/yuchenlin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you run the code quality scripts for the code quality test? `make style && make quality`, after installing the latest code quality versions: `pip install -U .[quality]`", "Looks like there is some styling issue. Could you run `make style` on your branch?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? Rename the arg names from "per_gpu" to "per_device" such that it aligns with the instruction in the readme https://github.com/huggingface/transformers/tree/master/examples/text-classification#xnli
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9477/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9477", "html_url": "https://github.com/huggingface/transformers/pull/9477", "diff_url": "https://github.com/huggingface/transformers/pull/9477.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9477.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/9476
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9476/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9476/comments
https://api.github.com/repos/huggingface/transformers/issues/9476/events
https://github.com/huggingface/transformers/pull/9476
781,956,922
MDExOlB1bGxSZXF1ZXN0NTUxNTk5MTYy
9,476
Improve LayoutLM
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the reviews, I've addressed all comments. There are 2 things remaining:\r\n\r\n- in the code examples, I use both `tokenize()` and `convert_tokens_to_ids` as the bounding boxes (which are at word-level) need to be converted to token-level. Is there a better solution?\r\n```\r\nwords = [\"Hello\", \"world\"]\r\nnormalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]\r\ntokens = []\r\ntoken_boxes = []\r\nfor word, box in zip(words, normalized_word_boxes):\r\n word_tokens = tokenizer.tokenize(word)\r\n tokens.extend(word_tokens)\r\n token_boxes.extend([box] * len(word_tokens))\r\n```\r\n- according to @sgugger the input data on which the integration tests are run are maybe too long, and black formatting causes them to be flattened vertically. Could you maybe fix this @LysandreJik? ", "I pushed the reformat you asked for @NielsRogge, make sure to pull before doing any more changes!", "Ok thank you, so the only thing remaining is to make the code examples more efficient? Is there a way to make the code block (see comment above) better?" ]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? - [x] Improve documentation of `LayoutLM`, explaining how people can normalize bounding boxes before passing them to the model, add links to the various datasets on which the model achieves state-of-the-art results, add code examples in the documentation for the various models - [x] Add notebook to the list of community notebooks showcasing how to fine-tune `LayoutLMForTokenClassification` on the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset (on which the model achieves SOTA results) - [x] Add integration tests, which confirm that the model outputs the same tensors as the original implementation on the same input data - [x] Add `LayoutLMForSequenceClassification`, which makes it possible to fine-tune LayoutLM for document image classification tasks (such as the [RVL-CDIP dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/)), extra tests included. Fixes the following issues: - #9228 - #9097 - #8866 - #8524 ## Who can review? @LysandreJik, @patrickvonplaten, @sgugger
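For the bounding-box normalization mentioned in the first checklist item, a commonly used helper scales pixel coordinates to the 0-1000 range LayoutLM expects. This is a sketch, not necessarily the exact code added to the docs; the example box and page size are made up.

```python
def normalize_box(box, width, height):
    # box = [x0, y0, x1, y1] in pixels; width/height are the page dimensions in pixels
    return [
        int(1000 * box[0] / width),
        int(1000 * box[1] / height),
        int(1000 * box[2] / width),
        int(1000 * box[3] / height),
    ]

print(normalize_box([48, 84, 73, 92], width=762, height=1000))  # [62, 84, 95, 92]
```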
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9476/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9476/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9476", "html_url": "https://github.com/huggingface/transformers/pull/9476", "diff_url": "https://github.com/huggingface/transformers/pull/9476.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9476.patch", "merged_at": 1610461592000 }
https://api.github.com/repos/huggingface/transformers/issues/9475
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9475/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9475/comments
https://api.github.com/repos/huggingface/transformers/issues/9475/events
https://github.com/huggingface/transformers/issues/9475
781,798,899
MDU6SXNzdWU3ODE3OTg4OTk=
9,475
[trainer] fractional epoch
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "The fraction (and float) `'epoch': 0.3333333333333333` comes from here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/1c19b423bf274a465f95725a79819bf82f71329e/src/transformers/trainer.py#L899\r\n\r\n@sgugger - is this by design \r\n\r\nor should it be `ceil`: `'epoch': 1`\r\n```\r\n self.state.epoch = math.ceil( epoch + (step + 1) / steps_in_epoch)\r\n```\r\nor `floor`: `'epoch': 0`\r\n```\r\n self.state.epoch = math.floor( epoch + (step + 1) / steps_in_epoch) \r\n```\r\n\r\n`'epoch': 0.3333333333333333` is telling me it's somewhere in the first epoch but isn't done yet?\r\n\r\nPerhaps it's just fine, it's just very odd to see epoch not being an int.\r\n\r\nThanks.\r\n", "This is not a bug, it's completely normal. See the [documentation](https://huggingface.co/transformers/main_classes/callback.html#trainerstate) of `TrainerState.epoch`.", "Ah, OK. Some rounding then perhaps - `0.3333333333333333` is just too loud. 2 decimals?", "Sure, there is no formatting at all for those results, but we can add some.", "Anything else to format while I'm at it? loss I guess - 4 decimals, right?\r\n\r\n`{'loss': 14.846837043762207, 'learning_rate': 6e-06, 'epoch': 0.3333333333333333} `", "I don't see why not." ]
1,610
1,610
1,610
CONTRIBUTOR
null
Running ``` export BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 100 --n_val 100 --n_test 100 --fp16 --save_steps 1 ``` on master, gives: ``` {'loss': 14.846837043762207, 'learning_rate': 6e-06, 'epoch': 0.3333333333333333} ``` epoch can't be fractional.
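A minimal sketch (not the Trainer's actual logging code) of the rounding settled on in the comments above, with 4 decimals for the loss and 2 for the fractional epoch:

```python
logs = {"loss": 14.846837043762207, "learning_rate": 6e-06, "epoch": 1 / 3}
rounded = {
    key: round(value, 4) if key == "loss" else round(value, 2) if key == "epoch" else value
    for key, value in logs.items()
}
print(rounded)  # {'loss': 14.8468, 'learning_rate': 6e-06, 'epoch': 0.33}
```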
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9475/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9474
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9474/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9474/comments
https://api.github.com/repos/huggingface/transformers/issues/9474/events
https://github.com/huggingface/transformers/pull/9474
781,628,510
MDExOlB1bGxSZXF1ZXN0NTUxMzE4MDI5
9,474
Fast imports part 3
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
COLLABORATOR
null
# What does this PR do? This is the last PR to make the import of transformers and defer the imports of torch/tensorflow to when is necessary. It does the same work as #9446 but in each itnermediate init, so that ``` from transformers import BertModel ``` only imports torch and not TensorFlow (ans is thus very fast). The templates are adapted to the new init format, so users adding models don't have to worry about this. In passing, I noticed that the `tokenization_utils_base` was importing everything at init, so I deferred imports there to only do them when necessary. There might be a few places like this left, but we can address those later on. Fixes #8733
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9474/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9474", "html_url": "https://github.com/huggingface/transformers/pull/9474", "diff_url": "https://github.com/huggingface/transformers/pull/9474.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9474.patch", "merged_at": 1610109660000 }
https://api.github.com/repos/huggingface/transformers/issues/9473
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9473/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9473/comments
https://api.github.com/repos/huggingface/transformers/issues/9473/events
https://github.com/huggingface/transformers/pull/9473
781,604,780
MDExOlB1bGxSZXF1ZXN0NTUxMjk4NjMz
9,473
[Generation Tests] Small speed-up by just generating two tokens
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just noticed that generation tests are completely irrelevant for the overall testing time...no generation test takes more than 0.5 seconds and 95 % of the generation tests take less than 0.05 seconds" ]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> @LysandreJik @sgugger I originally thought that the PR: https://github.com/huggingface/transformers/commit/c89f1bc92e340600bde526b7ff54ad692b4e48c9 made the PyTorch tests much slower, but after checking the time of `run_tests_torch` in 10+ merges to master after and before this commit, I noticed that the PR didn't really affect the testing time of PyTorch. The testing time varies quite a bit, but it seemed on average to be a bit higher after the merged PR, so in this PR I want to reduce the testing time for generation a bit. The generation length is reduced by one which halves the testing time of all generation tests by 30% without any loss in testing coverage / cases. Generating two tokens is enough => the first token can be generated without `past_key_values`, but the second token has to be generated with `past_key_values` if `use_cache` is enabled and all generation steps following this one can only be the same. So we should always test at least two tokens, but don't really need to test more in general generation tests that apply to all models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9473/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9473", "html_url": "https://github.com/huggingface/transformers/pull/9473", "diff_url": "https://github.com/huggingface/transformers/pull/9473.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9473.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/9472
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9472/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9472/comments
https://api.github.com/repos/huggingface/transformers/issues/9472/events
https://github.com/huggingface/transformers/pull/9472
781,554,964
MDExOlB1bGxSZXF1ZXN0NTUxMjU3NzI3
9,472
[Generation] Fix bug for manual decoder_input_ids + warning message
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Also checked that slow tests are passing" ]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Some improvements on the design of how `decoder_input_ids` are extracted that solve: https://github.com/huggingface/transformers/issues/9400 Also adds a nicer warning to prevent non-understandable errors as shown in: https://github.com/huggingface/transformers/issues/9464 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9472/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9472/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9472", "html_url": "https://github.com/huggingface/transformers/pull/9472", "diff_url": "https://github.com/huggingface/transformers/pull/9472.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9472.patch", "merged_at": 1610103040000 }
https://api.github.com/repos/huggingface/transformers/issues/9471
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9471/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9471/comments
https://api.github.com/repos/huggingface/transformers/issues/9471/events
https://github.com/huggingface/transformers/issues/9471
781,535,246
MDU6SXNzdWU3ODE1MzUyNDY=
9,471
model.generate() has the same speed on CPU and GPU
{ "login": "tomdzh", "id": 50083108, "node_id": "MDQ6VXNlcjUwMDgzMTA4", "avatar_url": "https://avatars.githubusercontent.com/u/50083108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomdzh", "html_url": "https://github.com/tomdzh", "followers_url": "https://api.github.com/users/tomdzh/followers", "following_url": "https://api.github.com/users/tomdzh/following{/other_user}", "gists_url": "https://api.github.com/users/tomdzh/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomdzh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomdzh/subscriptions", "organizations_url": "https://api.github.com/users/tomdzh/orgs", "repos_url": "https://api.github.com/users/tomdzh/repos", "events_url": "https://api.github.com/users/tomdzh/events{/privacy}", "received_events_url": "https://api.github.com/users/tomdzh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just realized that I used a single input... Issue closed", "Thanks, your post helped me so much!\r\nI'm using BloomModel in an AWS Lambda function, but Lambda doesn't support GPU. \r\nSo I wrote the code like this:\r\ndevice = 'cpu'\r\n#topic variable is already given\r\nprompt = f' About {Topic} is what I think: '\r\ninputs = tokenizer(prompt, return_tensors='pt')\r\n\r\ninputs = inputs['input_ids'].to(device)\r\nmodel = model.to(device)\r\nsample = model.generate(inputs, max_length=100, temperature=0.9, repetition_penalty = 2.0)\r\noutput = tokenizer.decode(sample[0], truncate_before_pattern=[r\"\\n\\n^#\", \"^'''\", \"\\n\\n\\n\"])\r\n" ]
1,610
1,663
1,610
NONE
null
Hi, I find that model.generate() of BART and T5 has roughly the same running speed when running on CPU and GPU. Why doesn't GPU give faster speed? Thanks! ## Environment info - `transformers` version: 4.1.1 - Python version: 3.6 - PyTorch version (GPU?): 1.3.1 - Using GPU in script?: yes ### Who can help TextGeneration: @TevenLeScao Bart: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): BART and T5 ## To reproduce ```python import time from transformers import BartTokenizer, BartForConditionalGeneration device = 'cpu' # change to GPU # device = 'cuda:0' text_to_summarize = "My friends are cool but they eat too many carbs." tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model = BartForConditionalGeneration.from_pretrained('facebook/bart-base') inputs = tokenizer(text_to_summarize, return_tensors='pt') inputs = inputs['input_ids'].to(device) model = model.to(device) start = time.time() summary_ids = model.generate(inputs) print("Time spent (s): ", time.time() - start) ``` ## Expected behavior I expected running on GPU should give me much faster speed. But running on GPU gave me roughly the same speed as CPU, both around 0.3s in this case.
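As the follow-up comment notes, a single short input hides any GPU advantage because per-call overhead dominates. A hedged sketch of a fairer comparison follows: batch many inputs and synchronize the device before reading the clock. The batch size of 64 is illustrative; the model and sentence come from the issue.

```python
import time
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").to(device)

texts = ["My friends are cool but they eat too many carbs."] * 64  # a whole batch
inputs = tokenizer(texts, return_tensors="pt", padding=True).to(device)

if device.startswith("cuda"):
    torch.cuda.synchronize()
start = time.time()
summary_ids = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"])
if device.startswith("cuda"):
    torch.cuda.synchronize()
print("Time spent (s):", time.time() - start)
```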
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9471/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9470
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9470/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9470/comments
https://api.github.com/repos/huggingface/transformers/issues/9470/events
https://github.com/huggingface/transformers/issues/9470
781,526,953
MDU6SXNzdWU3ODE1MjY5NTM=
9,470
max_target length for question answering system
{ "login": "Arij-Aladel", "id": 68355048, "node_id": "MDQ6VXNlcjY4MzU1MDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arij-Aladel", "html_url": "https://github.com/Arij-Aladel", "followers_url": "https://api.github.com/users/Arij-Aladel/followers", "following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}", "gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions", "organizations_url": "https://api.github.com/users/Arij-Aladel/orgs", "repos_url": "https://api.github.com/users/Arij-Aladel/repos", "events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}", "received_events_url": "https://api.github.com/users/Arij-Aladel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This doesn't seem to be related to your length, but rather to this:\r\n\r\n```py\r\ndata['labels'].squeeze(): 60\r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n<ipython-input-12-33cc96af719d> in <module>()\r\n 2 input = tokenizer.decode(data['input_ids'].squeeze(), skip_special_tokens=True)\r\n 3 print(\"data['labels'].squeeze(): \", len(data['labels'].squeeze()))\r\n----> 4 label = tokenizer.decode(data['labels'].squeeze(), skip_special_tokens=True)\r\n 5 print(\"data keys: \", data.keys(),\"\\n\")\r\n 6 lines = textwrap.wrap(\"Query:\\n%s\\n\" % data['question'], width=150)\r\n\r\n5 frames\r\n/usr/local/lib/python3.6/dist-packages/sentencepiece/__init__.py in _func(v, n)\r\n 492 def _func(v, n):\r\n 493 if type(n) is int and (n < 0 or n >= v.piece_size()):\r\n--> 494 raise IndexError('piece id is out of range.')\r\n 495 return func(v, n)\r\n 496 \r\n\r\nIndexError: piece id is out of range.\r\n```\r\n\r\nYour tokenizer doesn't manage to decode your label.", "@LysandreJik yes, I understand that, but at first I did not understand what I had done wrong. Then I realized that labels that are -100 should be 0 again.\r\n\r\nAnyway, better to figure it out late than never." ]
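A small sketch (not taken from the notebook) of the fix described in the last comment: positions set to -100 for the loss must be mapped back to a real token id, such as the pad id, before `decode` can handle them. The tokenizer choice and the label ids here are arbitrary examples.

```python
import torch
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
labels = torch.tensor([363, 19, 8, 1525, 58, 1, -100, -100])  # -100 marks positions ignored by the loss
labels = torch.where(labels == -100, torch.tensor(tokenizer.pad_token_id), labels)
print(tokenizer.decode(labels.tolist(), skip_special_tokens=True))
```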
1,610
1,610
1,610
NONE
null
Could you please tell me the max target length for question answering systems? I was trying and it does not work if the target length is more than 47. This is the [notebook](https://colab.research.google.com/drive/1JzsuPb68L-G4nsMXu57XkwbmzAVrizSS?usp=sharing).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9470/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9469
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9469/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9469/comments
https://api.github.com/repos/huggingface/transformers/issues/9469/events
https://github.com/huggingface/transformers/issues/9469
781,511,910
MDU6SXNzdWU3ODE1MTE5MTA=
9,469
Cannot Evaluate While Training Using the Trainer
{ "login": "AliOskooeiTR", "id": 60223746, "node_id": "MDQ6VXNlcjYwMjIzNzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/60223746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AliOskooeiTR", "html_url": "https://github.com/AliOskooeiTR", "followers_url": "https://api.github.com/users/AliOskooeiTR/followers", "following_url": "https://api.github.com/users/AliOskooeiTR/following{/other_user}", "gists_url": "https://api.github.com/users/AliOskooeiTR/gists{/gist_id}", "starred_url": "https://api.github.com/users/AliOskooeiTR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AliOskooeiTR/subscriptions", "organizations_url": "https://api.github.com/users/AliOskooeiTR/orgs", "repos_url": "https://api.github.com/users/AliOskooeiTR/repos", "events_url": "https://api.github.com/users/AliOskooeiTR/events{/privacy}", "received_events_url": "https://api.github.com/users/AliOskooeiTR/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi there, I would like to help you, but the code you are providing is not runable on my side (MLMTrainer, MLMArguments, train_dataset, eval_dataset and rt_model for instance are not defined). Could you please post a complete and short reproducer of the bug?" ]
1,610
1,610
1,610
NONE
null
@sgugger ## Environment info - `transformers` version: 4.0.0 - Platform: AWS Amazon Linux - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.1 - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information Model I am using (Bert, XLNet ...): Third Party Model ( Routing Transformer) The problem arises when using: * [ ] my own modified scripts: I am using a custom Trainer to train the third party model. The training loops runs smoothly without evaluation. But as soon as I try to do evaluation while training, it stops training after a couple of evaluations even though it has not reached the max_steps. I copy my args and trainer below. I understand in previous versions there was a evaluate_during_training flag that some have suggested as a fix until Sep 20. But that flag doesn't seem to exist anymore. Any help or pointers would be appreciated. The tasks I am working on is: * [ ] my own task or dataset: Masked Language Modeling ## To reproduce Steps to reproduce the behavior: ``` custom_args = MLMArguments( output_dir='../models/', mask_ratio=0.2, do_train=True, do_eval=True, max_steps=2000000, save_steps=50, logging_steps=5, per_device_train_batch_size=1, per_device_eval_batch_size=1, logging_dir = '../logs/', evaluation_strategy="steps", prediction_loss_only=True ) checkpoint_callback=TrainerCallback() tb_callback = TensorBoardCallback() custom_trainer = MLMTrainer( rt_model, args=custom_args, train_dataset=train_dataset, eval_dataset=eval_dataset, callbacks=[checkpoint_callback, tb_callback] ) custom_trainer.train() ``` Step | Training Loss | Validation Loss -- | -- | -- 5 | 10.033914 | 10.017973 10 | 10.028201 | 9.935969 and at this point it prints the output and stops training.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9469/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9468
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9468/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9468/comments
https://api.github.com/repos/huggingface/transformers/issues/9468/events
https://github.com/huggingface/transformers/issues/9468
781,502,993
MDU6SXNzdWU3ODE1MDI5OTM=
9,468
Have RAG return generator cross-attentions when output_attentions=True
{ "login": "dblakely", "id": 20539855, "node_id": "MDQ6VXNlcjIwNTM5ODU1", "avatar_url": "https://avatars.githubusercontent.com/u/20539855?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dblakely", "html_url": "https://github.com/dblakely", "followers_url": "https://api.github.com/users/dblakely/followers", "following_url": "https://api.github.com/users/dblakely/following{/other_user}", "gists_url": "https://api.github.com/users/dblakely/gists{/gist_id}", "starred_url": "https://api.github.com/users/dblakely/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dblakely/subscriptions", "organizations_url": "https://api.github.com/users/dblakely/orgs", "repos_url": "https://api.github.com/users/dblakely/repos", "events_url": "https://api.github.com/users/dblakely/events{/privacy}", "received_events_url": "https://api.github.com/users/dblakely/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten, @lhoestq Any feedback on this? ", "Feel free to open a PR indeed :) \r\nWhat do you think @patrickvonplaten ? I guess it can be part of the RetrievAugLMMarginOutput attributes.", "Hey @dblakely,\r\n\r\nIt would be great if you could open a PR. Both Bart and T5 already return the `cross_attentions`, so it should be a pretty easy change by just adding \r\n\r\n```python\r\ngenerator_cross_attentions=gen_outputs.cross_attentions,\r\n```\r\nhere:\r\n\r\nhttps://github.com/huggingface/transformers/blob/fac7cfb16a437a97584f6a14c3856b2e06bf0eaa/src/transformers/models/rag/modeling_rag.py#L657\r\n\r\nand then adding `generator_cross_attentions` to all output classes as suggested by @lhoestq " ]
1,610
1,611
1,611
CONTRIBUTOR
null
# 🚀 Have RAG return generator cross-attentions when output_attentions=True This feature request is for the RAG code to be modified so that if `output_attentions=True`, it returns the generator's cross-attentions in addition to the attentions it already returns. ## Motivation I'm interested in extracting the generator's attentions from a RAG generator model. Currently, `transformers` allows you to extract the generator's encoder attentions and decoder attentions, but not its cross-attentions. For example, inside `modeling_rag.py`, the return objects, such as [RetrievAugLMMarginOutput](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/modeling_rag.py#L38), have fields for these other attentions, but not the cross-attentions. Because both T5 and BART can output cross-attentions, I think they could simply propagate up through the RAG code. Is there a reason this isn't already the case? Or could I do a PR to include the cross-attentions along with the other attentions in the model output? ## Your contribution On my own fork of `transformers`, I've already added this feature and would happily submit a PR!
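To illustrate what "adding `generator_cross_attentions` to the output classes" could look like, here is a sketch only (not the merged change), with the field set trimmed for brevity:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch
from transformers.file_utils import ModelOutput

@dataclass
class RetrievAugLMMarginOutputSketch(ModelOutput):
    # The real output class carries many more fields; only a few are shown here.
    loss: Optional[torch.FloatTensor] = None
    logits: torch.FloatTensor = None
    generator_cross_attentions: Optional[Tuple[torch.FloatTensor]] = None
```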
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9468/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9467
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9467/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9467/comments
https://api.github.com/repos/huggingface/transformers/issues/9467/events
https://github.com/huggingface/transformers/issues/9467
781,466,458
MDU6SXNzdWU3ODE0NjY0NTg=
9,467
Unable to train sequence classification task using TFTrainer
{ "login": "parambharat", "id": 12809212, "node_id": "MDQ6VXNlcjEyODA5MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/12809212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parambharat", "html_url": "https://github.com/parambharat", "followers_url": "https://api.github.com/users/parambharat/followers", "following_url": "https://api.github.com/users/parambharat/following{/other_user}", "gists_url": "https://api.github.com/users/parambharat/gists{/gist_id}", "starred_url": "https://api.github.com/users/parambharat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parambharat/subscriptions", "organizations_url": "https://api.github.com/users/parambharat/orgs", "repos_url": "https://api.github.com/users/parambharat/repos", "events_url": "https://api.github.com/users/parambharat/events{/privacy}", "received_events_url": "https://api.github.com/users/parambharat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello!\r\n\r\nThis is because you do not instantiate your model in the created strategy. You already have an example on how to train such models in the [repo](https://github.com/huggingface/transformers/tree/master/examples/text-classification)", "Hi @jplu,\r\nAre you referring to this: https://github.com/huggingface/transformers/blob/f33a6f34461fea61b579a7ec732fcd174b2b41cd/examples/text-classification/run_tf_text_classification.py#L263\r\ni.e. do i just need to wrap `load_model` in the above code in the context manager `with training_args.strategy.scope():` resulting in \r\n```\r\n# train model\r\ndef train_model(\r\n model_args, data_dir, model_dir, logs_dir, batch_size=32, num_epochs=10\r\n):\r\n training_args = TFTrainingArguments(\r\n output_dir=model_dir,\r\n num_train_epochs=num_epochs,\r\n do_train=True,\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size * 2,\r\n evaluation_strategy=\"steps\",\r\n warmup_steps=500,\r\n weight_decay=0.01,\r\n logging_dir=logs_dir,\r\n dataloader_num_workers=15,\r\n )\r\n datasets = {\r\n \"train\": load_dataset(data_dir=data_dir, split=\"train\", batch_size=batch_size),\r\n \"val\": load_dataset(\r\n data_dir=data_dir, split=\"validation\", batch_size=batch_size\r\n ),\r\n }\r\n \r\n with training_args.strategy.scope():\r\n model = load_model(**model_args)\r\n\r\n trainer = TFTrainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=datasets[\"train\"],\r\n eval_dataset=datasets[\"val\"],\r\n compute_metrics=compute_metrics,\r\n )\r\n trainer.train()\r\n return trainer\r\n```\r\n\r\nIf not could you point me to the right place?", "Yes this is what I meant :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
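A minimal sketch of the rule being applied above, assuming TensorFlow 2.x: the model (and its optimizer) must be created inside the same `tf.distribute` strategy scope that the training step later runs under, otherwise optimizer slot variables end up under a different strategy.

```python
import tensorflow as tf

strategy = tf.distribute.OneDeviceStrategy("/cpu:0")

with strategy.scope():
    # Variables created here belong to the strategy, so optimizer slot variables match.
    model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
    optimizer = tf.keras.optimizers.Adam(1e-3)

model.compile(optimizer=optimizer, loss="mse")
model.fit(tf.random.normal((8, 4)), tf.random.normal((8, 2)), epochs=1, verbose=0)
```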
1,610
1,619
1,619
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Linux-4.15.0-123-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.4.0 (False) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> Trainer: @sgugger Tensorflow: @jplu ## Information Model I am using (Bert, XLNet ...): distilbert-base-cased The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The following is how I load the model and the trainer ```{python} from sklearn.metrics import accuracy_score, precision_recall_fscore_support from datasets import load_from_disk import tensorflow as tf from transformers import TFAutoModelForSequenceClassification, AutoTokenizer from transformers import TFTrainingArguments, TFTrainer # Load dataset def load_dataset(data_dir, split="train", batch_size=32, shuffle=100): dataset = load_from_disk(data_dir)[split] label_type = tf.int32 input_names = ["input_ids", "attention_mask", "token_type_ids"] def gen(): for ex in dataset: d = {k: v for k, v in ex.items() if v is not None} label = d.pop("tag") yield (d, label) tf_dataset = tf.data.Dataset.from_generator( gen, ({k: tf.int32 for k in input_names}, label_type), ({k: tf.TensorShape([None]) for k in input_names}, tf.TensorShape([])), ) tf_dataset = tf_dataset.apply(tf.data.experimental.assert_cardinality(len(dataset))) return tf_dataset # Load the model def load_model(name="distilbert-base-cased", num_labels=11, learning_rate=3e-5): tokenizer = AutoTokenizer.from_pretrained(name) tokenizer.add_special_tokens({"bos_token": "<s>", "eos_token": "</s>"}) model = TFAutoModelForSequenceClassification.from_pretrained( name, num_labels=num_labels ) model.resize_token_embeddings(len(tokenizer)) optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer=optimizer, loss=loss) model.summary() return model #metrics def compute_metrics(pred): labels = pred.label_ids predictions = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support( labels, predictions, average="weighted" ) acc = accuracy_score(labels, predictions) return {"accuracy": acc, "f1": f1, "precision": precision, "recall": recall} # train model def train_model( model_args, data_dir, model_dir, logs_dir, batch_size=32, num_epochs=10 ): training_args = TFTrainingArguments( output_dir=model_dir, num_train_epochs=num_epochs, do_train=True, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size * 2, evaluation_strategy="steps", warmup_steps=500, weight_decay=0.01, logging_dir=logs_dir, dataloader_num_workers=15, ) datasets = { "train": load_dataset(data_dir=data_dir, split="train", batch_size=batch_size), "val": load_dataset( data_dir=data_dir, split="validation", batch_size=batch_size ), } model = load_model(**model_args) trainer = TFTrainer( model=model, args=training_args, train_dataset=datasets["train"], eval_dataset=datasets["val"], 
compute_metrics=compute_metrics, ) trainer.train() return trainer ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) A multi-class classification task to classify a sentence into one of 11 known categories ## To reproduce Steps to reproduce the behavior: 1. A classification task with more the 2 categories - i.e. num_labels > 2. 2. Use pretrained distill bert model for sequence classification 3. Load the dataset and finetune the model with TFTrainer. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace from the error. ``` ValueError: in user code: /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/transformers/trainer_tf.py:678 distributed_training_steps * self.args.strategy.run(self.apply_gradients, inputs) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/transformers/trainer_tf.py:641 apply_gradients * self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables))) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/transformers/optimization_tf.py:232 apply_gradients * return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:604 apply_gradients ** self._create_all_weights(var_list) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:783 _create_all_weights self._create_slots(var_list) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/adam.py:127 _create_slots self.add_slot(var, 'm') /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:844 add_slot .format(strategy, var)) ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7efd64765bd0>), which is different from the scope used for the original variable (<tf.Variable 'tf_distil_bert_for_sequence_classification/distilbert/embeddings/tf_distil_bert_for_sequence_classification/distilbert/embeddings/word_embeddings/weight:0' shape=(28998, 768) dtype=float32, numpy= array([[-0.02513016, -0.03304445, -0.00243959, ..., -0.01084836, -0.04682418, -0.00948554], [-0.00482445, -0.02148623, -0.00871447, ..., -0.02602929, -0.03786189, -0.02410287], [-0.01653061, -0.01786226, 0.00105964, ..., -0.01637051, -0.03567044, -0.03141942], ..., [ 0.01190545, -0.02329331, -0.02250608, ..., -0.02713599, -0.04355597, 0.00010529], [ 0.00688736, 0.02267248, 0.02263871, ..., -0.00735895, -0.00814128, 0.00426289], [ 0.00320692, -0.0061747 , 0.01624888, ..., 0.00641411, 0.00060032, 0.01258053]], dtype=float32)>). Make sure the slot variables are created under the same strategy scope. 
This may happen if you're restoring from a checkpoint outside the scope ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The same model trains successfully when trained as a tf.keras model with a batched tfdataset. ``` # modified training code to use the keras model instance that trains the model successfully. def train_model( model_args, data_dir, model_dir, logs_dir, batch_size=32, num_epochs=10 ): training_args = TFTrainingArguments( output_dir=model_dir, num_train_epochs=num_epochs, do_train=True, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size * 2, evaluation_strategy="steps", warmup_steps=500, weight_decay=0.01, logging_dir=logs_dir, dataloader_num_workers=15, ) datasets = { "train": load_dataset(data_dir=data_dir, split="train", batch_size=batch_size), "val": load_dataset( data_dir=data_dir, split="validation", batch_size=batch_size ), } model = load_model(**model_args) history = model.fit(datasets["train"].batch(32), verbose=1) return history ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9467/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9467/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9466
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9466/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9466/comments
https://api.github.com/repos/huggingface/transformers/issues/9466/events
https://github.com/huggingface/transformers/issues/9466
781,462,102
MDU6SXNzdWU3ODE0NjIxMDI=
9,466
RuntimeError when running Reformer model
{ "login": "albusdemens", "id": 276459, "node_id": "MDQ6VXNlcjI3NjQ1OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/276459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albusdemens", "html_url": "https://github.com/albusdemens", "followers_url": "https://api.github.com/users/albusdemens/followers", "following_url": "https://api.github.com/users/albusdemens/following{/other_user}", "gists_url": "https://api.github.com/users/albusdemens/gists{/gist_id}", "starred_url": "https://api.github.com/users/albusdemens/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albusdemens/subscriptions", "organizations_url": "https://api.github.com/users/albusdemens/orgs", "repos_url": "https://api.github.com/users/albusdemens/repos", "events_url": "https://api.github.com/users/albusdemens/events{/privacy}", "received_events_url": "https://api.github.com/users/albusdemens/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @albusdemens, could you maybe update your transformers version to 4.0.0?", "Thanks @patrickvonplaten, that fixed the issue! " ]
1,610
1,610
1,610
NONE
null
## Environment info - `transformers` version: 2.10.0 - Platform: Linux-5.4.0-1034-aws-x86_64-with-debian-buster-sid - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help --> @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Reformer The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the example code from [here](https://huggingface.co/google/reformer-crime-and-punishment?text=My+name+is+Julien+and+I+like+to): ``` model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment") tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") tok.decode(model.generate(tok.encode("A few months later", return_tensors="pt"), do_sample=True,temperature=0.7, max_length=100)[0]) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior RuntimeError: Overflow when unpacking long (more details below) ``` <ipython-input-33-0a824540b4e0> in <module> 2 tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") 3 tok.decode(model.generate(tok.encode("A few months later", return_tensors="pt"), ----> 4 do_sample=True,temperature=0.7)[0]) ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 13 def decorate_context(*args, **kwargs): 14 with self: ---> 15 return func(*args, **kwargs) 16 return decorate_context 17 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, **model_specific_kwargs) 1179 attention_mask=attention_mask, 1180 use_cache=use_cache, -> 1181 model_specific_kwargs=model_specific_kwargs, 1182 ) 1183 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_utils.py in _generate_no_beam_search(self, input_ids, cur_len, max_length, min_length, do_sample, temperature, top_k, top_p, repetition_penalty, no_repeat_ngram_size, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, decoder_start_token_id, batch_size, encoder_outputs, attention_mask, use_cache, model_specific_kwargs) 1221 ) 1222 -> 1223 outputs = self(**model_inputs) 1224 next_token_logits = outputs[0][:, -1, :] 1225 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), 
~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(self, input_ids, position_ids, attention_mask, head_mask, inputs_embeds, num_hashes, labels, do_output_hidden_states, do_output_attentions) 1738 num_hashes=num_hashes, 1739 do_output_hidden_states=do_output_hidden_states, -> 1740 do_output_attentions=do_output_attentions, 1741 ) 1742 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(self, input_ids, attention_mask, position_ids, head_mask, inputs_embeds, num_hashes, do_output_hidden_states, do_output_attentions) 1588 num_hashes=num_hashes, 1589 do_output_hidden_states=do_output_hidden_states, -> 1590 do_output_attentions=do_output_attentions, 1591 ) 1592 sequence_output = encoder_outputs.hidden_states ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(self, hidden_states, attention_mask, head_mask, num_hashes, do_output_hidden_states, do_output_attentions) 1324 all_attentions, 1325 do_output_hidden_states, -> 1326 do_output_attentions, 1327 ) 1328 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(ctx, hidden_states, layers, attention_mask, head_mask, num_hashes, all_hidden_states, all_attentions, do_output_hidden_states, do_output_attentions) 1220 head_mask=layer_head_mask, 1221 num_hashes=num_hashes, -> 1222 do_output_attentions=do_output_attentions, 1223 ) 1224 attn_output = layer_outputs.attn_output ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(***failed resolving arguments***) 1111 # for dropout and save seed for forward fn in backward 1112 # to have correct dropout -> 1113 self._init_feed_forward_seed() 1114 # Y_2 = X_2 + g(Y_1) 1115 hidden_states = hidden_states + self.feed_forward(attn_output) ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in _init_feed_forward_seed(self) 1075 else: 1076 # CPU -> 1077 self.feed_forward_seed = int(torch.seed() % sys.maxsize) 1078 torch.manual_seed(self.feed_forward_seed) 1079 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/random.py in seed() 43 44 if not torch.cuda._is_in_bad_fork(): ---> 45 torch.cuda.manual_seed_all(seed) 46 47 return seed ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/cuda/random.py in manual_seed_all(seed) 111 default_generator.manual_seed(seed) 112 --> 113 _lazy_call(cb) 114 115 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/cuda/__init__.py in _lazy_call(callable) 133 def _lazy_call(callable): 134 if is_initialized(): --> 135 callable() 
136 else: 137 # Don't store the actual traceback to avoid memory cycle ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/cuda/random.py in cb() 109 for i in range(device_count()): 110 default_generator = torch.cuda.default_generators[i] --> 111 default_generator.manual_seed(seed) 112 113 _lazy_call(cb) RuntimeError: Overflow when unpacking long ```
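The single comment on this report (see the comments field above) says upgrading `transformers` to 4.0.0 resolved the overflow. A minimal, hedged sanity check before re-running the snippet from the report — nothing here comes from the original issue beyond the version number it mentions:

```python
# Sketch only: verify the installed version before retrying the generation snippet.
import transformers
from packaging import version  # packaging ships as a transformers dependency

assert version.parse(transformers.__version__) >= version.parse("4.0.0"), \
    "Upgrade first, e.g. `pip install -U transformers`, as suggested in the comment above."
```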
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9466/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9465
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9465/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9465/comments
https://api.github.com/repos/huggingface/transformers/issues/9465/events
https://github.com/huggingface/transformers/pull/9465
781,422,721
MDExOlB1bGxSZXF1ZXN0NTUxMTQ4OTM4
9,465
[README] Add new models
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds LED and BlenderbotSmall to the Readme. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9465/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9465", "html_url": "https://github.com/huggingface/transformers/pull/9465", "diff_url": "https://github.com/huggingface/transformers/pull/9465.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9465.patch", "merged_at": 1610102984000 }
https://api.github.com/repos/huggingface/transformers/issues/9464
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9464/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9464/comments
https://api.github.com/repos/huggingface/transformers/issues/9464/events
https://github.com/huggingface/transformers/issues/9464
781,388,851
MDU6SXNzdWU3ODEzODg4NTE=
9,464
UnboundLocalError when generating sequences
{ "login": "miguelvictor", "id": 6831138, "node_id": "MDQ6VXNlcjY4MzExMzg=", "avatar_url": "https://avatars.githubusercontent.com/u/6831138?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miguelvictor", "html_url": "https://github.com/miguelvictor", "followers_url": "https://api.github.com/users/miguelvictor/followers", "following_url": "https://api.github.com/users/miguelvictor/following{/other_user}", "gists_url": "https://api.github.com/users/miguelvictor/gists{/gist_id}", "starred_url": "https://api.github.com/users/miguelvictor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miguelvictor/subscriptions", "organizations_url": "https://api.github.com/users/miguelvictor/orgs", "repos_url": "https://api.github.com/users/miguelvictor/repos", "events_url": "https://api.github.com/users/miguelvictor/events{/privacy}", "received_events_url": "https://api.github.com/users/miguelvictor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @miguelvictor, the problem is that `max_length` is set to a value that is too small. You need to increase either,\r\n\r\n```model.config.max_length``` or pass a `max_length` parameter to `generate()` that is longer than your input_ids.", "Ohh... my bad. Thank you!" ]
1,610
1,610
1,610
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.2.0dev0 - Platform: macOS-11.1-x86_64-i386-64bit - Python version: 3.8.7 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten ## Information Model I am using GPT2LMHeadModel: The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) Generate sequences using with the following snippet: ```python model.generate( input_ids, do_sample=False, num_beams=beam_width, num_return_sequences=beam_width, early_stopping=False, output_scores=True, return_dict_in_generate=True, ) ``` Generate The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Try to generate sequences as mentioned above. ### Traceback ``` Traceback (most recent call last): File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 394, in run_asgi result = await app(self.scope, self.receive, self.send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__ return await self.app(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/fastapi/applications.py", line 199, in __call__ await super().__call__(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__ await self.middleware_stack(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__ raise exc from None File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__ await self.app(scope, receive, _send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/middleware/cors.py", line 86, in __call__ await self.simple_response(scope, receive, send, request_headers=headers) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/middleware/cors.py", line 142, in simple_response await self.app(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__ raise exc from None File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__ await route.handle(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle await self.app(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/routing.py", line 41, in app response = await func(request) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/fastapi/routing.py", line 201, in app raw_response = await run_endpoint_function( File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/fastapi/routing.py", line 150, in 
run_endpoint_function return await run_in_threadpool(dependant.call, **values) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool return await loop.run_in_executor(None, func, *args) File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/thread.py", line 57, in run result = self.fn(*self.args, **self.kwargs) File "./server.py", line 47, in doSampleGPT2 results = sampleGPT2v2(model=model, tokenizer=tokenizer, sequence=src) File "./ccompletion/samplers/gpt2sampler.py", line 151, in sampleGPT2v2 outputs = model.generate( File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/Users/miguelvictor/Projects/transformers/src/transformers/generation_utils.py", line 943, in generate return self.beam_search( File "/Users/miguelvictor/Projects/transformers/src/transformers/generation_utils.py", line 1655, in beam_search input_ids, beam_scores, next_tokens, next_indices, pad_token_id=pad_token_id, eos_token_id=eos_token_id UnboundLocalError: local variable 'next_tokens' referenced before assignment ``` ## Expected behavior No errors raised.
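The maintainer's reply in the comments field above attributes the `UnboundLocalError` to `max_length` being no larger than the prompt. A minimal sketch of the suggested fix; the checkpoint name, prompt, and beam width are illustrative assumptions, not taken from the report:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # assumed checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("def add(a, b):", return_tensors="pt").input_ids
beam_width = 4                                       # assumed value

outputs = model.generate(
    input_ids,
    do_sample=False,
    num_beams=beam_width,
    num_return_sequences=beam_width,
    early_stopping=False,
    output_scores=True,
    return_dict_in_generate=True,
    # the fix: allow generation to run past the prompt length
    max_length=input_ids.shape[-1] + 50,
)
```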
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9464/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9463
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9463/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9463/comments
https://api.github.com/repos/huggingface/transformers/issues/9463/events
https://github.com/huggingface/transformers/issues/9463
781,353,135
MDU6SXNzdWU3ODEzNTMxMzU=
9,463
FileNotFoundError when instantiating RagRetriever
{ "login": "poccio", "id": 3777650, "node_id": "MDQ6VXNlcjM3Nzc2NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/3777650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poccio", "html_url": "https://github.com/poccio", "followers_url": "https://api.github.com/users/poccio/followers", "following_url": "https://api.github.com/users/poccio/following{/other_user}", "gists_url": "https://api.github.com/users/poccio/gists{/gist_id}", "starred_url": "https://api.github.com/users/poccio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poccio/subscriptions", "organizations_url": "https://api.github.com/users/poccio/orgs", "repos_url": "https://api.github.com/users/poccio/repos", "events_url": "https://api.github.com/users/poccio/events{/privacy}", "received_events_url": "https://api.github.com/users/poccio/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "Hi ! Thanks for reporting\r\nCan you try again ? I fixed the missing file", "Thanks a lot! I'll try right away (with my internet speed, it should take ~1h30)", "Yup, I can confirm now the issue is resolved. Thanks a lot! Shall I close the issue?", "Hey @poccio,\r\n\r\nusually, always feel free to close issues that you opened. As maintainers, we don't always feel comfortable closing an issue since it's not always clear whether the author's issue is solved. So if it's solved for you, it's great if you close it :-) Thanks for reporting the issue." ]
1,610
1,610
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @lhoestq ## Information I am trying to use RAG but I am having issues downloading the compressed index. ## To reproduce ```python from transformers import RagRetriever retriever = RagRetriever.from_pretrained('facebook/rag-sequence-nq', dataset='wiki_dpr', index_name='compressed') ``` Which results in: ```python FileNotFoundError: Couldn't find file at https://storage.googleapis.com/huggingface-nlp/cache/datasets/wiki_dpr/psgs_w100.nq.compressed/0.0.0/psgs_w100.nq.IVF4096_HNSW128_PQ128-IP-train.faiss ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9463/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9462
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9462/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9462/comments
https://api.github.com/repos/huggingface/transformers/issues/9462/events
https://github.com/huggingface/transformers/pull/9462
781,331,387
MDExOlB1bGxSZXF1ZXN0NTUxMDczNzIy
9,462
Fix scatter import
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
MEMBER
null
Scatter is wrongly spelled
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9462/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9462", "html_url": "https://github.com/huggingface/transformers/pull/9462", "diff_url": "https://github.com/huggingface/transformers/pull/9462.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9462.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/9461
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9461/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9461/comments
https://api.github.com/repos/huggingface/transformers/issues/9461/events
https://github.com/huggingface/transformers/issues/9461
781,254,275
MDU6SXNzdWU3ODEyNTQyNzU=
9,461
Error while loading finetuned distilbert model: embedding dimension mismatch
{ "login": "rohanshingade", "id": 18469762, "node_id": "MDQ6VXNlcjE4NDY5NzYy", "avatar_url": "https://avatars.githubusercontent.com/u/18469762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohanshingade", "html_url": "https://github.com/rohanshingade", "followers_url": "https://api.github.com/users/rohanshingade/followers", "following_url": "https://api.github.com/users/rohanshingade/following{/other_user}", "gists_url": "https://api.github.com/users/rohanshingade/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohanshingade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohanshingade/subscriptions", "organizations_url": "https://api.github.com/users/rohanshingade/orgs", "repos_url": "https://api.github.com/users/rohanshingade/repos", "events_url": "https://api.github.com/users/rohanshingade/events{/privacy}", "received_events_url": "https://api.github.com/users/rohanshingade/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello!\r\n\r\nFirst of all, can you try with the source version in order to see if the problem still occurs.", "Hello @jplu ,\r\nJust running these 4 lines throws error. \r\n\r\n```\r\nmodel = TFDistilBertForSequenceClassification.from_pretrained(\"distilbert-base-multilingual-cased\")\r\nmodel.layers[0].embeddings.trainable = False\r\nmodel.save_pretrained(\"model\")\r\nloaded_model = TFDistilBertForSequenceClassification.from_pretrained(\"model\")\r\n```\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-8-aa416dc4d078> in <module>\r\n----> 1 loaded_model = TFDistilBertForSequenceClassification.from_pretrained(\"model\")\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 614 # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357\r\n 615 try:\r\n--> 616 model.load_weights(resolved_archive_file, by_name=True)\r\n 617 except OSError:\r\n 618 raise OSError(\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in load_weights(self, filepath, by_name, skip_mismatch, options)\r\n 2207 if by_name:\r\n 2208 hdf5_format.load_weights_from_hdf5_group_by_name(\r\n-> 2209 f, self.layers, skip_mismatch=skip_mismatch)\r\n 2210 else:\r\n 2211 hdf5_format.load_weights_from_hdf5_group(f, self.layers)\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group_by_name(f, layers, skip_mismatch)\r\n 784 symbolic_weights[i])) +\r\n 785 ', but the saved weight has shape ' +\r\n--> 786 str(weight_values[i].shape) + '.')\r\n 787 \r\n 788 else:\r\n\r\nValueError: Layer #0 (named \"distilbert\"), weight <tf.Variable 'tf_distil_bert_for_sequence_classification_1/distilbert/embeddings/word_embeddings/weight:0' shape=(119547, 768) dtype=float32, numpy=\r\narray([[ 0.00801877, -0.01047559, -0.03101005, ..., 0.02595956,\r\n -0.01114979, 0.0103603 ],\r\n [ 0.00097553, -0.00474179, -0.0065623 , ..., 0.03424093,\r\n -0.0189246 , 0.01545161],\r\n [-0.02869349, -0.03147252, -0.02191292, ..., 0.00606783,\r\n 0.0091517 , 0.00140686],\r\n ...,\r\n [ 0.00324067, 0.01025188, -0.0173355 , ..., 0.00799547,\r\n 0.00298822, -0.00772437],\r\n [ 0.00393043, 0.02751113, 0.00989435, ..., 0.00630352,\r\n -0.01590282, 0.00017761],\r\n [-0.02440546, -0.02454552, 0.01318205, ..., -0.02244014,\r\n 0.02798119, -0.006583 ]], dtype=float32)> has shape (119547, 768), but the saved weight has shape (768, 768).\r\n\r\n```\r\n​\r\n\r\n", "With which transformers version?", "I'm using this docker image `huggingface/transformers-tensorflow-gpu:3.3.1`", "Ok I just tried your code snipped on the source version and it works as expected, so it looks like this issue has already been fixed. Then I suggest you to update your container to the last release.", "Thanks", "I'm getting the same error, with code that hasn't changed since it worked. I guess the model downloaded by from_pretrained() is no longer compatible with older tokenizers or transformers code.\r\n\r\ntokenizers==0.8.1.rc1\r\ntransformers==3.0.2\r\n\r\nBut what to upgrade to so that it works? I'm trying to maintain compatibility with javascript tokenizers 0.6.2 because there are other version issues there on the nodejs side. 
However it seems this may not be possible.\r\n\r\nJust got same error with:\r\ntokenizers==0.8.1.rc2\r\ntransformers==3.3.1\r\n", "Can't even get working with latest tokenizers and transformers. Although upgrading change the error to:\r\n`ValueError: cannot reshape array of size 22268928 into shape (30522,768)`", "Reproduction script:\r\n```\r\nfrom transformers import DistilBertConfig, TFDistilBertModel\r\nconfig = DistilBertConfig(dropout=0.2, attention_dropout=0.2)\r\nconfig.output_hidden_states = False\r\nprint('loaading')\r\ntransformer_model = TFDistilBertModel.from_pretrained(\r\n \"distilbert-base-cased\", config=config\r\n)\r\nprint('loaded')\r\n```\r\n\r\nPython: 3.8.4\r\n\r\nabsl-py==0.10.0\r\nastunparse==1.6.3\r\ndatasets==1.8.0\r\nfilelock==3.0.12\r\ngast==0.3.3\r\nh5py==2.10.0\r\nkeras==2.4.3\r\nkeras-applications==1.0.8\r\nkeras-preprocessing==1.1.2\r\nnumpy==1.19.2\r\nopt-einsum==3.3.0\r\nprotobuf==3.12.2\r\nregex==2020.7.14\r\nrequests==2.24.0\r\nsacremoses==0.0.43\r\nsentencepiece==0.1.94\r\nsix==1.15.0\r\nscikit-learn==0.24.2\r\nscipy==1.4.1\r\ntensorboard==2.4.0\r\ntensorflow==2.4.0\r\ntermcolor==1.1.0\r\ntokenizers==0.10.3\r\ntqdm==4.48.0\r\ntransformers==4.7.0\r\nwrapt==1.12.1\r\n\r\n\r\n", "Interestingly `distibert-base-uncased` works just not `distilbert-base-cased`. Maybe missing some config?", "+1" ]
1,610
1,625
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Python version: 3.6.9 - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik @patil-suraj @jplu <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information I am using `TFAutoModelForSequenceClassification` with `distilbert-base-multilingual-cased` model. For finetuning i have freezed the embedding layer. Finetuning is successful and i have saved the weights using `save_pretrained`. However after finetuning when i load the model for inference using `TFAutoModelForSequenceClassification` or `TFDistilBertForSequenceClassification` it throws error. However, I did not face any issues with `TFXLMRobertaForSequenceClassification` and `jplu/tf-xlm-roberta-base` while trying the same thing. 
## To reproduce ``` tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased") tokenizer_output = tokenizer.batch_encode_plus(train_texts, max_length=100, padding="max_length", truncation=True,return_attention_mask=True, add_special_tokens=True) input_ids, attention_mask = tokenizer_output["input_ids"], tokenizer_output["attention_mask"] config = AutoConfig.from_pretrained("distilbert-base-multilingual-cased", num_labels=num_classes,label2id=label2id, id2label=id2label,finetuning_task="text-classification") model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-multilingual-cased", config=config) # freezing embedding layers model.layers[0].embeddings.trainable = False loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.Accuracy() optimizer = tf.keras.optimizers.Adam(learning_rate=2e-6, epsilon=1e-08) model.compile(loss=loss, optimizer=optimizer, metrics=[metric]) model.fit([input_ids, attention_mask], train_labels, epochs=10, batch_size=16) model.save_pretrained("model") #Throws error for both cases #loaded_model = TFAutoModelForSequenceClassification.from_pretrained("model") loaded_model = TFDistilBertForSequenceClassification.from_pretrained("model") ``` Error while loading saved model: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-29-0ad4a4ce38ca> in <module> 2 3 #config = AutoConfig.from_pretrained(os.path.join("/home/Rajat/Rohan/models/xlmr104", "model")) ----> 4 model = TFDistilBertForSequenceClassification.from_pretrained(os.path.join("/home/Rajat/Rohan/models/dbert103", "model")) 5 6 t2 = time.time() /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 614 # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357 615 try: --> 616 model.load_weights(resolved_archive_file, by_name=True) 617 except OSError: 618 raise OSError( /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in load_weights(self, filepath, by_name, skip_mismatch, options) 2207 if by_name: 2208 hdf5_format.load_weights_from_hdf5_group_by_name( -> 2209 f, self.layers, skip_mismatch=skip_mismatch) 2210 else: 2211 hdf5_format.load_weights_from_hdf5_group(f, self.layers) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group_by_name(f, layers, skip_mismatch) 784 symbolic_weights[i])) + 785 ', but the saved weight has shape ' + --> 786 str(weight_values[i].shape) + '.') 787 788 else: ValueError: Layer #0 (named "distilbert"), weight <tf.Variable 'tf_distil_bert_for_sequence_classification_9/distilbert/embeddings/word_embeddings/weight:0' shape=(119547, 768) dtype=float32, numpy= array([[ 0.00207364, 0.01255192, 0.01065131, ..., 0.0182375 , -0.01671835, -0.02844721], [ 0.0333954 , 0.03589885, -0.03751937, ..., -0.01915496, -0.00888181, -0.00063128], [ 0.01174717, 0.00945629, -0.01179059, ..., 0.03340805, -0.00715566, -0.02317093], ..., [ 0.01775699, -0.01719745, -0.03220321, ..., 0.00817569, -0.00393617, -0.00730391], [ 0.03056052, -0.00136884, -0.02507194, ..., 0.01245719, -0.00362111, -0.01495665], [ 0.03703629, 0.01664717, -0.01278388, ..., 0.02537051, 0.02492457, 0.01191532]], dtype=float32)> has shape (119547, 768), but the saved weight has shape (768, 768). 
``` <!-- A clear and concise description of what you would expect to happen. -->
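The thread above already contains the shortest reproduction; restated here as a standalone sketch for convenience (the reported resolution was simply moving to a release newer than 3.3.1 that includes the weight-loading fix):

```python
from transformers import TFDistilBertForSequenceClassification

# Freeze the embeddings, save, then reload — on transformers 3.3.1 the reload
# failed with the shape-mismatch error quoted above; newer versions load fine.
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-multilingual-cased")
model.layers[0].embeddings.trainable = False
model.save_pretrained("model")
loaded_model = TFDistilBertForSequenceClassification.from_pretrained("model")
```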
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9461/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9460
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9460/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9460/comments
https://api.github.com/repos/huggingface/transformers/issues/9460/events
https://github.com/huggingface/transformers/pull/9460
781,248,338
MDExOlB1bGxSZXF1ZXN0NTUxMDA0NTM5
9,460
[TFGPT2] - Fix flaky past_key_values test
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR attempts to fix the flaky TFGPT2 test: [tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_gpt2_model_past_large_inputs](https://app.circleci.com/pipelines/github/huggingface/transformers/18086/workflows/2c889716-285d-489f-9c1b-03c99155ea37/jobs/145873) To be honest, I'm not really sure what is/was going on there. I don't see an obvious bug in any of the test and `TFGPT2` wasn't changed for a long time -> so not sure what's going on. Also before doing the changes in the PR the test failed 1/20 times in my bash loop. The only change in this PR is to change the batch_size from 13 to 1 as it's done in other TF tests as well (see: https://github.com/huggingface/transformers/blob/a400fe8931cce276df74c7c7a5ee4dd28b5674ec/tests/test_modeling_tf_t5.py#L203). => so I think the test should have passed previously as well (there should be no difference between batch_size 1 and 13 ...) After the change, I ran the test 60 times in a loop and it never failed - we should still keep an eye on it though. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9460/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9460", "html_url": "https://github.com/huggingface/transformers/pull/9460", "diff_url": "https://github.com/huggingface/transformers/pull/9460.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9460.patch", "merged_at": 1610032329000 }
https://api.github.com/repos/huggingface/transformers/issues/9459
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9459/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9459/comments
https://api.github.com/repos/huggingface/transformers/issues/9459/events
https://github.com/huggingface/transformers/pull/9459
781,229,015
MDExOlB1bGxSZXF1ZXN0NTUwOTg4Mzg1
9,459
[LED Test] fix common inputs pt for flaky pt-tf led test
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik @sgugger - should fix flaky TFLED test.", "Thanks!" ]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes flaky led test: [tests/test_modeling_tf_led.py::TFLEDModelTest::test_pt_tf_model_equivalence](https://app.circleci.com/pipelines/github/huggingface/transformers/18159/workflows/e39565cd-188f-406a-bc8c-3db64a5829c5/jobs/146772/steps). It's the classic bug for Longformer, I forgot to set the global attention mask correctly for the common inputs for PT ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9459/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9459", "html_url": "https://github.com/huggingface/transformers/pull/9459", "diff_url": "https://github.com/huggingface/transformers/pull/9459.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9459.patch", "merged_at": 1610018944000 }
https://api.github.com/repos/huggingface/transformers/issues/9458
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9458/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9458/comments
https://api.github.com/repos/huggingface/transformers/issues/9458/events
https://github.com/huggingface/transformers/issues/9458
781,229,005
MDU6SXNzdWU3ODEyMjkwMDU=
9,458
Closed
{ "login": "cyk1337", "id": 13767887, "node_id": "MDQ6VXNlcjEzNzY3ODg3", "avatar_url": "https://avatars.githubusercontent.com/u/13767887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyk1337", "html_url": "https://github.com/cyk1337", "followers_url": "https://api.github.com/users/cyk1337/followers", "following_url": "https://api.github.com/users/cyk1337/following{/other_user}", "gists_url": "https://api.github.com/users/cyk1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyk1337/subscriptions", "organizations_url": "https://api.github.com/users/cyk1337/orgs", "repos_url": "https://api.github.com/users/cyk1337/repos", "events_url": "https://api.github.com/users/cyk1337/events{/privacy}", "received_events_url": "https://api.github.com/users/cyk1337/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,610
1,610
1,610
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9458/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9458/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9457
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9457/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9457/comments
https://api.github.com/repos/huggingface/transformers/issues/9457/events
https://github.com/huggingface/transformers/issues/9457
781,212,580
MDU6SXNzdWU3ODEyMTI1ODA=
9,457
[Blenderbot] Model yields weird results
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, I'm actually investigating this, also see #9365", "Any new insights into this issue? ", "Yes, most of the work was done here:\r\n\r\nhttps://github.com/huggingface/transformers/pull/10002\r\nand\r\nhttps://github.com/huggingface/transformers/pull/9984\r\n\r\nIt was mostly linked to something which was not supported by the `generate` function (namely `encoder_no_repeat_n_gram_size`) at the time.\r\n\r\nI've seen a few issues creep up again about blenderbot (namely questioning the separation scheme of conversation items).\r\nI didn't have time to dive more into it again to double check, but at the time of the mentionned PRs, the separation scheme was tripled checked against the `master` branch of ParlAI (the questionning was mentionning the docs, which could always be outdated).\r\n\r\nAlso keep in mind, ParlAI actually uses more scheme to prevent the model from outputting too many odd stuff. There's an hardcoded banned word list + an actual model to detect anything inappropriate (maybe more, what I found was way out of scope for transformers and also extremely specific to Blenderbot). The \"personna\" thing, are usable within transformers, but do rely on tricks. A \"personna\" is actually just a prompt at the start of the conversation looking like \"your personna: You live in a mansion\".\r\nSo prefixing your conversation with \"your persona: You live in a mansion Hi there!\" should yield the same results as Blenderbot.\r\nCheck ParlAI implementations to confirm (I'm not sure about the actual casing used and so on).", "Thanks for the reply @Narsil as well as the links to the related PRs. Yes, I'm aware of ParlAI's implementation of a safety detector. Thanks also for the point about the persona implementation - that is what I assumed but it's great that you've confirmed. \r\n\r\nJust to check, is the separation scheme a total of three spaces between turns? (2 in the join operator plus an extra at the start of each sentence) This is what I see in `tests/test_pipelines_conversational.py` \r\n\r\nIf so, the [documentation](https://huggingface.co/transformers/model_doc/blenderbot.html#tfblenderbotforconditionalgeneration) may be outdated, as it uses `</s> <s>` between turns, which produces different results. ", "Yes, I confirmed that it was 3 spaces. \r\nIt's supposed to be 4 spaces, but if I remember correctly, it was actually 2 + 1 hardcoded. I checked at the token level in the end, and it's 228, 228 all the time.\r\n\r\nFound the persona code, the sentence split was a bit more spread out, I can't find it right away,\r\n\r\nit's somewhere in there https://github.com/facebookresearch/ParlAI/blob/master/parlai/core/torch_generator_agent.py if you want to start inspecting live code.", "> Yes, I confirmed that it was 3 spaces.\r\n> It's supposed to be 4 spaces, but if I remember correctly, it was actually 2 + 1 hardcoded. I checked at the token level in the end, and it's 228, 228 all the time.\r\n> \r\n> Found the persona code, the sentence split was a bit more spread out, I can't find it right away,\r\n> \r\n> it's somewhere in there https://github.com/facebookresearch/ParlAI/blob/master/parlai/core/torch_generator_agent.py if you want to start inspecting live code.\r\n\r\nPerfect, thanks for the reference. I just managed to do some poking around in the ParlAI library and confirmed the delimiter token in the history object. It is also what you found. 
\r\n\r\n```python\r\nfrom parlai.core.agents import create_agent_from_model_file\r\nblender_agent = create_agent_from_model_file(\"zoo:blender/blender_400Mdistill/model\", {\"skip_generation\": False})\r\n\r\nprint(blender_agent.history.delimiter_tok)\r\n\r\n# Output: [228, 228]\r\n```\r\n\r\nFor persona, looks like they just separate all the persona details with newlines, and bundle it into the first turn. E.g.\r\n\r\nyour persona: I like cheese`\\n`your persona: I am from New York City`[228, 228]`Hi, where are you from`[228, 228]`Hi, I'm from the city of new york city. How about you? Do you like cheese?`[228,228]`do you like cheese?`[228, 228]`Yes, I love cheese. It is one of my favorite foods. What is your favorite food?\r\n\r\nReference: https://github.com/facebookresearch/ParlAI/issues/2872\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,610
1,620
1,620
MEMBER
null
As discussed with @Narsil offline, Blenderbot seems to yield weird generation results. I think we have to dive deeper into the original `ParlAI` lib and make sure that there is no flaw in the model or the generate function. This is also on my to-do list. Pinging @patil-suraj and @Narsil for notice.
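A minimal sketch of the conversation-formatting scheme worked out in the comments above (three-space turn delimiter, persona lines newline-separated and prepended as the first turn). The checkpoint name comes from the thread; everything else is illustrative, not an official Blenderbot recipe:

```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-400M-distill"  # checkpoint referenced in the thread
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

persona = "your persona: I like cheese\nyour persona: I am from New York City"
turns = ["Hi, where are you from?"]

# Three spaces between turns tokenize to the [228, 228] delimiter pair
# identified in the comments; the persona block is simply the first "turn".
text = "   ".join([persona] + turns)
inputs = tokenizer([text], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```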
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9457/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9456
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9456/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9456/comments
https://api.github.com/repos/huggingface/transformers/issues/9456/events
https://github.com/huggingface/transformers/issues/9456
781,203,207
MDU6SXNzdWU3ODEyMDMyMDc=
9,456
[EncoderDecoder] Make sure `use_cache` is set to `True` for all Bert2Bert, Roberta2Roberta by default
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten \r\n\r\nCan we instead set `use_cache` to `True` by default in `generate`? That way we won't need to rely on `config` \r\n\r\nRight now, the `generate` [docstring](https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L689) says that it defaults to `True`, but it's set to `None`\r\n\r\nhttps://github.com/huggingface/transformers/blob/28d74872cc049e0cbee3fafd15cbbabfe348ebd4/src/transformers/generation_utils.py#L618\r\n", "Hmm, that goes a bit against the philosophy because we never \"set\" any variables in `generate()`. We should do it in `EncoderDecoderConfig` and in `from_encoder_decoder_pretrained`. Note that all args in `generate()` are set to `None`, but default to the respective config defaults which should be set correctly", "Also `use_cache` is newly introduced in bert/roberta config and is `True` by default, so even if the model's config file online doesn't have `use_cache` it should still be `True,` no?\r\n\r\nCould you maybe provide an example where the above issue occurs?", "@patil-suraj, you're 100% right!\r\n\r\nI initially thought it's a problem because `EncoderDecoderConfig` does not have a `use_cache` param set to `True`, but it doesn't actually matter since `model.decoder.config.use_cache` will always be set to `True` by default which forces `use_cache` to be True in the decoder which makes it return the `past_key_values` => so all good then - thanks a lot for double-checking this :-)" ]
1,610
1,610
1,610
MEMBER
null
At the moment, loading a Bert2Bert with ```python model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "bert-base-cased") ``` does not automatically set `use_cache` to True -> so the user "silently" ends up with a much slower than optimal inference speed. Also, none of the Bert2Bert configs online have `use_cache` set to True. This should be changed at least for the heavily used Bert2Bert models. I'll try to take care of that in the next couple of days. Also pinging @patil-suraj for information. Thanks @Narsil for bringing up the topic.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9456/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9455
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9455/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9455/comments
https://api.github.com/repos/huggingface/transformers/issues/9455/events
https://github.com/huggingface/transformers/issues/9455
781,160,262
MDU6SXNzdWU3ODExNjAyNjI=
9,455
Rename `nlp` variables into more appropriate names
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "Thanks for creating the issue! Here are examples of what good names are in my humble opinion:\r\n```\r\nclassifier = pipeline(\"sentiment-analysis\")\r\nunmasker = pipeline(\"fill-mask\")\r\ntext_generator = pipeline(\"text-generation\")\r\n```\r\nIn short, something that describes the task it achieves.", "Hi guys, I'm new and I'd like to start helping out. Can I take over this request? And to clarify, you're referring to the references primarily in the /tests directory and in the /docs directory?", "Hi @terrenceedmonds , Thanks for taking this on ! \r\n\r\nYes, for both directories, but docs are also found within docstrings within the `src/transformers/pipelines` directory.", "Hii. Is this issue still open? I want to take this issue. Also, this will be my first contribution. Any help in getting me started will be highly appreciated.", "Let's see if @terrenceedmonds wants to to finish it first (the PR was almost ready to merge).", "Can I work on this issue, if @terrenceedmonds is not working on it?", "Yes, you can go ahead!" ]
1,610
1,621
1,621
CONTRIBUTOR
null
# 🚀 Feature request In `pipelines` tests and documentation, pipeline instances are recurrently named `nlp`; the goal is to rename them to something more appropriate. <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation ```python nlp = pipeline(task='conversational', model='XXX') ``` This is a bit pretentious, as it implies the object covers all of NLP, and a better name would help users understand it too. For instance, the `conversational` task pipeline could be named `conversational_agent`. Or maybe still a generic but less pretentious name such as `pipe` or `pipeline` (caveat: those are less clear about what they intend to achieve). The goal is to rename all occurrences of `nlp` to better names within both the tests and the documentation. <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution The better names could be discussed here. <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9455/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9455/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9454
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9454/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9454/comments
https://api.github.com/repos/huggingface/transformers/issues/9454/events
https://github.com/huggingface/transformers/pull/9454
781,158,716
MDExOlB1bGxSZXF1ZXN0NTUwOTI5OTYy
9,454
[Docs] Improve model sharing doc
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "LGTM! Thanks for fixing" ]
1,610
1,610
1,610
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> #9431 was merged too early - I should have waited for @julien-c feedback. This PR corrects the docs accordingly. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9454/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9454", "html_url": "https://github.com/huggingface/transformers/pull/9454", "diff_url": "https://github.com/huggingface/transformers/pull/9454.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9454.patch", "merged_at": 1610016663000 }
https://api.github.com/repos/huggingface/transformers/issues/9453
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9453/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9453/comments
https://api.github.com/repos/huggingface/transformers/issues/9453/events
https://github.com/huggingface/transformers/pull/9453
781,155,931
MDExOlB1bGxSZXF1ZXN0NTUwOTI3NjY0
9,453
Prophetnet optimization
{ "login": "guillaume-be", "id": 27071604, "node_id": "MDQ6VXNlcjI3MDcxNjA0", "avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guillaume-be", "html_url": "https://github.com/guillaume-be", "followers_url": "https://api.github.com/users/guillaume-be/followers", "following_url": "https://api.github.com/users/guillaume-be/following{/other_user}", "gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}", "starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions", "organizations_url": "https://api.github.com/users/guillaume-be/orgs", "repos_url": "https://api.github.com/users/guillaume-be/repos", "events_url": "https://api.github.com/users/guillaume-be/events{/privacy}", "received_events_url": "https://api.github.com/users/guillaume-be/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "All slow tests are passing! Very nice PR - thanks a mille @guillaume-be " ]
1,610
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? This PR proposes an optimization for the ProphetNet model. The current implementation calculates an attention bias mask by looping through the positions to unmask. It performs a high number of assignments (`ngram` * `sequence_length`), which can be on the order of ~1000. Single tensor assignments, especially on accelerators, are inefficient. This PR proposes a vectorized implementation which performs at most `ngram` assignments, which should be significantly fewer than `ngram * sequence_length`. A quick experiment (see https://gist.github.com/guillaume-be/e6b862c701fac1b54765e7af7e71c641) shows that: 1. this `ngram_attention_bias` calculation is very expensive, taking close to 230ms (!) on a GPU 2. the vectorized implementation is several orders of magnitude faster (the same calculation takes less than 1ms on the same example) ## Who can review? @patrickvonplaten maybe you would be a good candidate? I could not find anyone assigned for ProphetNet. edit: pushed some further optimization, accelerating by an additional ~40%
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9453/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9453/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9453", "html_url": "https://github.com/huggingface/transformers/pull/9453", "diff_url": "https://github.com/huggingface/transformers/pull/9453.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9453.patch", "merged_at": 1610016119000 }
https://api.github.com/repos/huggingface/transformers/issues/9452
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9452/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9452/comments
https://api.github.com/repos/huggingface/transformers/issues/9452/events
https://github.com/huggingface/transformers/issues/9452
781,109,289
MDU6SXNzdWU3ODExMDkyODk=
9,452
Error when running run_clm.py on Python3.9/MacOS
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems quite cryptic, but maybe @sgugger has already been confronted to that issue, so pinging him here.", "Never seen this before. There is some code in the HFArgumentParser to make it work with Python 3.9 that was added by @julien-c so maybe he has more insight?", "I want to provide more valuable information about this issue.\r\n\r\nThe field of the corresponding argument `--model_name_of_path` on my Mac/Python3.9 is like the following:\r\n\r\n```\r\nField(name='model_name_or_path',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object at 0x1065a6220>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': \"The model checkpoint for weights initialization.Don't set if you want to train a model from scratch.\"}),_field_type=_FIELD)\r\n```\r\n\r\nHowever, it is different on my PC/Python3.7.9.\r\n\r\n```\r\nField(name='model_name_or_path',type=typing.Union[str, NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object at 0x00000227D9888A48>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': \"The model checkpoint for weights initialization.Don't set if you want to train a model from scratch.\"}),_field_type=_FIELD)\r\n```\r\n\r\nThe critical change of it is the `type` attribute. The function in `transformers/data_classes.py` do not give `type=typing.Optional[str]` a appropriate solution.\r\n\r\nBut, I have no idea why the `type` attribute has that different value when I run it on Mac/Python3.9.1.", "#9479 will fix this I believe.", "Closed by #9479!" ]
1,610
1,610
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: macOS-11.0-arm64-arm-64bit - Python version: 3.9.1 - PyTorch version (GPU?): 1.8.0a0+c20b916 (False) - Tensorflow version (GPU?): not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [yes ] the official example scripts: (give details below) The tasks I am working on is: * [no] an official GLUE/SQUaD task: language-modeling task; dataset: wikitext ## To reproduce Steps to reproduce the behavior: 1. install transformers from the master branch of version 4.1.1 2. run examples/language-modeling/run_clm.py 3. arguments are as following: `--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir test-clm/` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` /Users/liyucheng/miniforge3/bin/python /Users/liyucheng/projects/comments_generation/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir test-clm/ Traceback (most recent call last): File "/Users/liyucheng/projects/comments_generation/run_clm.py", line 388, in <module> main() File "/Users/liyucheng/projects/comments_generation/run_clm.py", line 145, in main parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/hf_argparser.py", line 52, in __init__ self._add_dataclass_arguments(dtype) File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/hf_argparser.py", line 85, in _add_dataclass_arguments elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List): File "/Users/liyucheng/miniforge3/lib/python3.9/typing.py", line 829, in __subclasscheck__ return issubclass(cls, self.__origin__) TypeError: issubclass() arg 1 must be a class Process finished with exit code 1 ``` This error is bizarre cause it only occurs on my OSX and I cannot reproduce it on my PC. I think the main reason is about the decorator `dataset`, but I am not sure about that. Thanks for any helps.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9452/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9451
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9451/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9451/comments
https://api.github.com/repos/huggingface/transformers/issues/9451/events
https://github.com/huggingface/transformers/pull/9451
781,028,791
MDExOlB1bGxSZXF1ZXN0NTUwODIxODg3
9,451
[trainer] remove `--model_parallel`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for putting it back. Since we're in a PR on this test alone, can we \"fix\" it to ignore the `args.model_parallel` argument? This argument will be removed/renamed (I'd prefer the first option as it's not useful) since peoples are confusing it with something that will enable `DataParallel`. The test can be replaced by `model.is_parallelizable and model.parallel` I believe, with the current API.", "2 things:\r\n\r\n1. you must be referring to `self.model_parallel`? But it will be always `False` unless `model.parallelize()` is called! \r\n\r\n So while you can rename the argument, you can't remove it, the user needs to activate this explicitly and the trainer then must activate MP with `model.parallelize()`\r\n \r\n Wrt `DataParallel`. Why are we turning it on automatically in first place? Why not make it manual and call it `--data_parallel` - no more confusion. Loud and clear:\r\n \r\n - `--model_parallel`\r\n - `--data_parallel`\r\n\r\n\r\n2. As we discovered last night current trainer doesn't work at all with --model_parallel - see https://github.com/huggingface/transformers/pull/9211#discussion_r553172405 there is no activation of that parallel mode - nobody calls `model.parallelize()` so it's very broken\r\n\r\nI change this code last night to;\r\n```\r\n if self.args.model_parallel:\r\n if model.is_parallelizable:\r\n model.parallelize()\r\n else:\r\n raise ValueError(\r\n f\"{model.__class__.__name__} implementation currently doesn't support model parallelism, therefore --model_parallel cl arg cannot be used\"\r\n )\r\n```\r\n\r\nand it doesn't work when I try:\r\n\r\n```\r\nrm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 2 --n_val 2 --n_test 2 --do_predict --model_parallel\r\n```\r\n\r\nIt doesn't look it ever worked...\r\n\r\ni.e. MP works when setup up manually but doesn't work in trainer.\r\n\r\np.s. I tagged you on that discussion - not sure if you saw it.", "> i.e. MP works when setup up manually but doesn't work in trainer.\r\n> As we discovered last night current trainer doesn't work at all with --model_parallel - see #9211 (comment) there is no activation of that parallel mode - nobody calls model.parallelize() so it's very broken\r\n\r\nThat's not a discovery on my side, that is exactly why I keep saying that the argument `--model_parallel` should be removed. It doesn't actually do anything and is confusing for the user. The call to `model.parallelize()` can always be done outside of `Trainer` IMO, which is why the test can be changed as suggested. We can think of integrating it inside the Trainer later, when the API is stable and actually used, for now I don't see the point of adding this.\r\n\r\n> Wrt DataParallel. Why are we turning it on automatically in first place? 
Why not make it manual and call it --data_parallel\r\n\r\nThat would be a big breaking change in the API, and beginners actually want to have the parallelism work out of the box when they have several GPUs, so I don't see why change something that works.", "> The call to model.parallelize() can always be done outside of Trainer IMO, which is why the test can be changed as suggested. \r\n\r\nIt doesn't work\r\n\r\n\r\n\r\n> Wrt DataParallel. Why are we turning it on automatically in first place? Why not make it manual and call it --data_parallel\r\n> \r\n> That would be a big breaking change in the API, and beginners actually want to have the parallelism work out of the box when they have several GPUs, so I don't see why change something that works.\r\n\r\nOK, then the flag should be there with the default On? Surely a user should be able not to run DP and it's not possible at the moment.", "OK, so I did remove `--model_parallel` - no problem in `trainer.py` since I used `model.is_parallelizable and model.parallel` instead - and I now understand that the point is that the user has to activate `model.parallelize()` themselves before passing the `model` to the trainer - i.e. no examples scripts will support MP at the moment.\r\n\r\nThe problem is `training_args.py` - how do I deal with:\r\n\r\n```\r\n if not self.model_parallel:\r\n train_batch_size = per_device_batch_size * max(1, self.n_gpu)\r\n else:\r\n train_batch_size = per_device_batch_size\r\n```\r\n\r\n`self` is args here, and there is no `trainer` object. Suggestions?\r\n\r\nBut I guess I need to first figure out how to make MP work in trainer at all, I doesn't look it was ever tried or tested. As it fails for me.", "FWIW, `--model_parallel` works just fine with my Bart MP PR: https://github.com/huggingface/transformers/pull/9384#issuecomment-756300194 in case someone needs it.\r\n\r\nI suspect t5 MP wasn't tested/made to work with `generate` tools (beam search, etc.) - **edit** It works now in this PR https://github.com/huggingface/transformers/pull/9323 - but super slow in beam search! ", "OK, I committed the bulk of it, and @sgugger will push some magic to deal with `training_args.py`\r\n\r\ntests should be failing I think until he does that. ", "So now I can see I can jokingly blame my initial mistake on @sgugger since he wanted it removed all along and so I unconsciously did it during rebasing and he unconsciously saw this as the right thing to do during the review ;) It's all Freud's fault anyway ;)", "I added a wrapped first, but it looked out of place so I introduced and documented a new attribute: `self.is_model_parallel` - hope it's loud and clear.", "@sgugger, I must be doing something wrong - that docstring section of `Important attributes` that I started in model_wrapped PR gets wrapped all funny - so I tried to add bullets and then it gets all messed up, as it bunches it all up into one paragraph. If I add new lines then `make docs` fails. Your magic touch is needed. Thank you.", "and here is why I removed `init=False` in https://github.com/huggingface/transformers/pull/9451/commits/a7a39216e99aae60238962ec3d6c96ecf23da42b\r\n\r\nThe tests were failing with:\r\n```\r\nTypeError: __init__() got an unexpected keyword argument '_n_gpu'\r\n```\r\nhttps://circle-production-customer-artifacts.s3.amazonaws.com/picard/forks/5bdabdd888af1f000130874a/278[…]cc8b6d6c390aab800d0cc1350f731a19529ac82f48\r\n", "Thank you for fixing the docs, @sgugger! " ]
1,609
1,610
1,610
CONTRIBUTOR
null
Per @sgugger's request, this removes `--model_parallel` from the trainer, as it was never tested or made to work with the trainer. We will get back to it in the future. This PR doesn't introduce breaking changes, since `--model_parallel` never worked (other than in my MP PRs, which have been parked for now because they are very inefficient and we are looking for a better approach rather than wasting time sorting those out). @LysandreJik, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9451/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9451", "html_url": "https://github.com/huggingface/transformers/pull/9451", "diff_url": "https://github.com/huggingface/transformers/pull/9451.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9451.patch", "merged_at": 1610375968000 }
https://api.github.com/repos/huggingface/transformers/issues/9450
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9450/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9450/comments
https://api.github.com/repos/huggingface/transformers/issues/9450/events
https://github.com/huggingface/transformers/issues/9450
781,010,285
MDU6SXNzdWU3ODEwMTAyODU=
9,450
Some layers of pretrained Albert model albert-base-v2 didn't match the architecture of AlbertForMaskedLM in latest transfomers 4.1.1.
{ "login": "BlueHeart0621", "id": 42397957, "node_id": "MDQ6VXNlcjQyMzk3OTU3", "avatar_url": "https://avatars.githubusercontent.com/u/42397957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BlueHeart0621", "html_url": "https://github.com/BlueHeart0621", "followers_url": "https://api.github.com/users/BlueHeart0621/followers", "following_url": "https://api.github.com/users/BlueHeart0621/following{/other_user}", "gists_url": "https://api.github.com/users/BlueHeart0621/gists{/gist_id}", "starred_url": "https://api.github.com/users/BlueHeart0621/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BlueHeart0621/subscriptions", "organizations_url": "https://api.github.com/users/BlueHeart0621/orgs", "repos_url": "https://api.github.com/users/BlueHeart0621/repos", "events_url": "https://api.github.com/users/BlueHeart0621/events{/privacy}", "received_events_url": "https://api.github.com/users/BlueHeart0621/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! When you speak of `unmatched layers`, do you mean the dropout layers? These layers have no weights.\r\n\r\nFurthermore, when setting the verbosity level to `INFO` and loading the `albert-base-v2` weights in current `master`'s `AlbertForMaskedLM`:\r\n\r\n```py\r\n>>> from transformers import AlbertForMaskedLM\r\n>>> from transformers import logging\r\n>>> logging.set_verbosity_info()\r\n>>> model = AlbertForMaskedLM.from_pretrained(\"albert-base-v2\")\r\n[...]\r\nAll model checkpoint weights were used when initializing AlbertForMaskedLM.\r\n\r\nAll the weights of AlbertForMaskedLM were initialized from the model checkpoint at albert-base-v2.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use AlbertForMaskedLM for predictions without further training.\r\n```\r\n\r\nThis tells you that all weights were correctly initialized. It would seem the issue comes from somewhere else, or maybe I have misunderstood your issue? Could you expand on how you identify the \"unmatched layers that cannot load pretrained parameters\"?", "Thanks. I made a mistake. The dropout layer is turely no weights. \r\nBut I still have a question. I am using the AlbertForMaskLM on cloth dataset, and I load the pretrained model albert-base-v2, the train accuracy is start from 0.28; I load the pretrained model albert-xxlarge-v2, the train accuracy is start from 0.79. Is it normal?\r\nThanks a lot.", "I do not have any experience with the CLOTH dataset, but taking a quick look at it it seems to be a cloze task, which is one of the pre-training objectives of the ALBERT model. It isn't surprising to me that the largest ALBERT model gets better results with no fine-tuning.", "Yes, the larger pretrained deserve better performance. But the base model is only start from 0.28, which mean just like randomly to choose answer in cloze(random is 0.25). And after convergence, the accuracy can reach 0.77. It seem to be the pretrained model doesn't learn any prior, just like from scratch. \r\nAnyway, thanks a lot." ]
1,609
1,610
1,610
NONE
null
albert: @LysandreJik ## Information Model I am using is Albert: The problem arises when using: * [x] my own modified scripts: (give details below) When I load pretrained albert-base-v2 model, I find some of the medata of the model.state_dict can not match the latest AlbertForMaskedLM model of transformers. And it seem to be that the pretrained model didn't repretrain after the albert code change. I find the AlbertAttention class in transformer 2.2.0 is: ```python class AlbertAttention(BertSelfAttention): def __init__(self, config): super(AlbertAttention, self).__init__(config) self.output_attentions = config.output_attentions self.num_attention_heads = config.num_attention_heads self.hidden_size = config.hidden_size self.attention_head_size = config.hidden_size // config.num_attention_heads self.dropout = nn.Dropout(config.attention_probs_dropout_prob) ...... ``` It has one layer `self.dropout`. However, the AlbertAttention class in transformer 4.1.1 is: ```python class AlbertAttention(nn.Module): def __init__(self, config): super().__init__() if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): raise ValueError( "The hidden size (%d) is not a multiple of the number of attention " "heads (%d)" % (config.hidden_size, config.num_attention_heads) ) self.num_attention_heads = config.num_attention_heads self.hidden_size = config.hidden_size self.attention_head_size = config.hidden_size // config.num_attention_heads self.all_head_size = self.num_attention_heads * self.attention_head_size self.query = nn.Linear(config.hidden_size, self.all_head_size) self.key = nn.Linear(config.hidden_size, self.all_head_size) self.value = nn.Linear(config.hidden_size, self.all_head_size) self.attention_dropout = nn.Dropout(config.attention_probs_dropout_prob) self.output_dropout = nn.Dropout(config.hidden_dropout_prob) ...... ``` It has two layer `self.attention_dropout ` and `self.output_dropout`. When I load pretrained model of albert, I find it still maintain the architecture of that in transformers 2.2.0. So those unmatched layer cannot load pretrained parameters, which make the model load from albert-base-v2 only has very low accuracy when training.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9450/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9449
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9449/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9449/comments
https://api.github.com/repos/huggingface/transformers/issues/9449/events
https://github.com/huggingface/transformers/pull/9449
781,006,611
MDExOlB1bGxSZXF1ZXN0NTUwODAzNjI5
9,449
[make fixup] a more reliable version of branching point discovery
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> why --fork-point doesn't work with the GitHub CLI\r\n\r\nI wasn't able to figure out what exactly those 2 tools do differently, but yes, `--fork-point` only works when specific conditions are met in the reflog, and it fails when some entries (the sha we are after) missing from it. I suppose `gh and` `git-pr` fetch just part of the reflog?\r\n\r\nApparently there are multiple causes. The first one is described at https://stackoverflow.com/a/53981615/9201239 and then it links to a discussion with additional causes.", "I see! Thank you, this is interesting!" ]
1,609
1,610
1,610
CONTRIBUTOR
null
This PR replaces: ``` git merge-base --fork-point master ``` with: ``` git merge-base master HEAD ``` in `utils/get_modified_files.py` (which is used by `make fixup`). As I reported in https://github.com/huggingface/transformers/issues/9425 the former method sometimes doesn't work when used with `gh pr checkout` or `git-pr`, rendering the relatively recently added git `--fork-point` feature unreliable. I have re-tested and the new way works for any of: 1. `gh pr checkout` 2. `git-pr` 3. `git pr` 4. a normal local git branch So this is what we are doing now to get only the modified files of the current branch: ``` git diff --name-only $(git merge-base master HEAD) ``` If we get complex branches that have various re-merges, we will want to find not the most recent common ancestor, which the above gives, but the oldest ancestor - after some research I found this: https://stackoverflow.com/a/4991675/9201239, which suggests: ``` diff --changed-group-format='' <(git rev-list --first-parent "${1:-master}") <(git rev-list --first-parent "${2:-HEAD}") | head -1 ``` and in the simple case where there is just one common ancestor it will find it too. So let's keep this as an option if you find the current solution isn't satisfactory. Fixes: #9425 @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9449/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9449", "html_url": "https://github.com/huggingface/transformers/pull/9449", "diff_url": "https://github.com/huggingface/transformers/pull/9449.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9449.patch", "merged_at": 1610012871000 }
https://api.github.com/repos/huggingface/transformers/issues/9448
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9448/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9448/comments
https://api.github.com/repos/huggingface/transformers/issues/9448/events
https://github.com/huggingface/transformers/issues/9448
781,004,861
MDU6SXNzdWU3ODEwMDQ4NjE=
9,448
Cannot use TransfoXLLMHeadModel with Trainer class because it returns a non scalar loss
{ "login": "gstranger", "id": 36181416, "node_id": "MDQ6VXNlcjM2MTgxNDE2", "avatar_url": "https://avatars.githubusercontent.com/u/36181416?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gstranger", "html_url": "https://github.com/gstranger", "followers_url": "https://api.github.com/users/gstranger/followers", "following_url": "https://api.github.com/users/gstranger/following{/other_user}", "gists_url": "https://api.github.com/users/gstranger/gists{/gist_id}", "starred_url": "https://api.github.com/users/gstranger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gstranger/subscriptions", "organizations_url": "https://api.github.com/users/gstranger/orgs", "repos_url": "https://api.github.com/users/gstranger/repos", "events_url": "https://api.github.com/users/gstranger/events{/privacy}", "received_events_url": "https://api.github.com/users/gstranger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "According to the error here, this seems to be because the ouptut of the `TransfoXLLMHeadModel` is not a scalar output. Taking a look at this model's loss output, named `losses`, it is an array of size `[bsz, tgt_len - 1]`.\r\n\r\nMaybe @TevenLeScao or @sgugger can chime in here at what the best procedure would be here, from a quick look the loss needs to be reduced. It seems this should be happening in the `Trainer` itself, but I'll let @sgugger decide.", "`TransfoXLLMHeadModel` is not compatible with `Trainer` as it does not output a loss. The model should be fixed to output one loss and not the losses, like all the other ones (which would be a breaking change).", "I see thank you for your replies. So to make this model compatible, I would need to create a custom `Trainer` class which overrides the `training_step` method and reduces the `losses` output to a scalar? How should I reduce the set? Would it be simpler to just train with a different causal language model from the library?", "I think it would be easier to use another model, in all honesty.\r\nIf you really want this one, you can use a subclass of `Trainer` and override the `compute_loss` function. There is an example of this in the [documentation](https://huggingface.co/transformers/main_classes/trainer.html). I think taking the mean would be a proper reduction.", "Thank you for your help. I've changed the title to better reflect the issue. You can close this ticket is you'd prefer this flagged a different way. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,609
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: Yes through Trainer class - Using distributed or parallel set-up in script?: No ### Who can help @TevenLeScao ## Information Model I am using: TransfoXLLMHeadModel The problem arises when using: - [ ] the official example scripts: (give details below) - [x] my own modified scripts: (give details below) The tasks I am working on is: - [x] my own task or dataset: (give details below) I am using a set of music data encoded as a language modeling problem. I have a Pytorch Dataset that returns a dictionary with the keys `input_ids`and `labels` from its `__getitem__` method which are 1D tensors that contain the example sequence to train on and predict. ## To reproduce Steps to reproduce the behavior: 1. Create a Pytorch dataset whose `__getitem__` method returns a dictionary with `input_ids` and `labels` with 1D Tensors ```python class ExampleDataset(Dataset): def __getitem__(self, index): sample = self.encodings[index] return {'input_ids': torch.tensor(sample.ids), 'labels': torch.tensor(sample.ids), 'mems': None} example_dataset = ExampleDataset() example_dataset[0] # { "input_ids": torch.tensor(0, 1,3 5, ... 330, 330), "labels": torch.tensor(0, 1, 3, 5, .. 330, 330) } # len: 512 len: 512 # '330' is pad token ``` 2. Instantiate the needed config and model ```python from transformers import TransfoXLConfig, TransfoXLLMHeadModel configuration = TransfoXLConfig( dropatt=0.1, vocab_size=len(tokenizer.get_vocab()), # Current size of vocab mem_len=512, # WordLevel tokenizers.Tokenizer d_inner=2048, n_layer=12, d_embed=512, n_head=8, d_head=64, cutoffs=[] ) test_conf = TransfoXLConfig(vocab_size=len(tokenizer.get_vocab())) model = TransfoXLLMHeadModel(configuration) model.resize_token_embeddings(len(tokenizer.get_vocab())) ``` 3. Instatiate TrainingArguments and Trainer, begin training ```python train_args = TrainingArguments( overwrite_output_dir=True, # Change this to continue training, ie load from checkpoint output_dir = 'example-train', do_train = True, num_train_epochs=2, per_device_train_batch_size=1, ) trainer = Trainer( model=model, args=train_args, train_dataset=example_dataset, ) trainer.train() ``` ```error /usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py:445: UserWarning: This overload of nonzero is deprecated: nonzero() Consider using one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.) 
indices_i = mask_i.nonzero().squeeze() --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-11-3435b262f1ae> in <module> ----> 1 trainer.train() /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path, trial) 773 tr_loss += self.training_step(model, inputs) 774 else: --> 775 tr_loss += self.training_step(model, inputs) 776 self._total_flos += self.floating_point_ops(inputs) 777 /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in training_step(self, model, inputs) 1124 scaled_loss.backward() 1125 else: -> 1126 loss.backward() 1127 1128 return loss.detach() /usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 183 products. Defaults to ``False``. 184 """ --> 185 torch.autograd.backward(self, gradient, retain_graph, create_graph) 186 187 def register_hook(self, hook): /usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 119 grad_tensors = list(grad_tensors) 120 --> 121 grad_tensors = _make_grads(tensors, grad_tensors) 122 if retain_graph is None: 123 retain_graph = create_graph /usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in _make_grads(outputs, grads) 45 if out.requires_grad: 46 if out.numel() != 1: ---> 47 raise RuntimeError("grad can be implicitly created only for scalar outputs") 48 new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format)) 49 else: RuntimeError: grad can be implicitly created only for scalar outputs ``` ## Expected behavior The model should be able to successfully complete training.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9448/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9447
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9447/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9447/comments
https://api.github.com/repos/huggingface/transformers/issues/9447/events
https://github.com/huggingface/transformers/issues/9447
780,988,538
MDU6SXNzdWU3ODA5ODg1Mzg=
9,447
urgent please help on memory issue during save
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @juliahane \r\n\r\nIt would be hard to answer without knowing the details \r\nCould you post the command that you are using, env info, which T5 model, training details etc ?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,609
1,619
1,619
NONE
null
Hi, I am getting very large memory usage while saving the model / running evaluation for T5, which results in the job being killed. This is very urgent, as I lose access to train the models. Please help. Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9447/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9447/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9446
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9446/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9446/comments
https://api.github.com/repos/huggingface/transformers/issues/9446/events
https://github.com/huggingface/transformers/pull/9446
780,887,632
MDExOlB1bGxSZXF1ZXN0NTUwNzA1ODI3
9,446
Transformers fast import part 2
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just followed a bug back to this PR, wanted to send a message here since it seemed relevant to ping @sgugger \r\n\r\nThe check for version in file_utils.py:\r\n`if version.parse(sys.version) < version.parse(\"3.8\"):` doesn't seem to be reliable for me (or on multiple machines and images I have)\r\n\r\nSpecifically: \r\n```\r\nPython 3.8.5 (default, Aug 6 2020, 14:13:36) \r\n[GCC 9.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import sys\r\n>>> from packaging import version\r\n>>> sys.version\r\n'3.8.5 (default, Aug 6 2020, 14:13:36) \\n[GCC 9.3.0]'\r\n>>> version.parse(\"3.8.5\")\r\n<Version('3.8.5')>\r\n>>> version.parse(\"3.8.5\") < version.parse(\"3.8\") # expect false\r\nFalse\r\n>>> version.parse(sys.version) < version.parse(\"3.8\") # expect false\r\nTrue\r\n```\r\n\r\nInstead, it seems more reliable/functional to not rely on `packaging.version` at all and instead do `sys.version_info < (3, 8)`.\r\n\r\nI can also put in an Issue if that's a more appropriate way to raise / flag a concern. Just thought i'd ping here since i was able to trace it back to this PR from today. ", "Oh, thanks for reporting! Will add this to #9474 which should be merged tomorrow." ]
1,609
1,610
1,610
COLLABORATOR
null
# What does this PR do? This is the second part of the work to allow a fast import of transformers by deferring the imports of dependencies until they are actually needed (almost but not quite, see below). It results in the line `import transformers` running in 239ms instead of 2.3s, so quite a nice speedup. To do this, the main init is changed to hold a big private dictionary that maps module names to public object names instead of directly importing those objects. A submodule or object is then only imported when explicitly requested, which means the line `import transformers` by itself doesn't import any of the dependencies. This mechanism is incompatible with absolute imports inside the library, hence the big diff, as I had to change quite a few `from transformers.xxx import yyy` to `from .xxx import yyy`. Also, this misses the last piece needed to be completely efficient: the intermediate init (in models) should use the same mechanism to avoid importing TensorFlow when we only request a PyTorch model. This will be done in another PR as this one is already quite big by itself. The script that creates the dummy objects needed some updates because it used to parse the init. I took this opportunity to also refactor the duplicated code. Obviously the templates also needed an update. The rework of the init also makes it important to have the intermediate init of `models` be nonempty, otherwise things like ``` import transformers auto_module = transformers.models.auto ``` will break. I don't think this is a big inconvenience (especially since the update template will fill this for the user).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9446/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9446/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9446", "html_url": "https://github.com/huggingface/transformers/pull/9446", "diff_url": "https://github.com/huggingface/transformers/pull/9446.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9446.patch", "merged_at": 1610030175000 }
https://api.github.com/repos/huggingface/transformers/issues/9445
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9445/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9445/comments
https://api.github.com/repos/huggingface/transformers/issues/9445/events
https://github.com/huggingface/transformers/issues/9445
780,782,192
MDU6SXNzdWU3ODA3ODIxOTI=
9,445
Loading fine-tuned models
{ "login": "chughe22", "id": 56651184, "node_id": "MDQ6VXNlcjU2NjUxMTg0", "avatar_url": "https://avatars.githubusercontent.com/u/56651184?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chughe22", "html_url": "https://github.com/chughe22", "followers_url": "https://api.github.com/users/chughe22/followers", "following_url": "https://api.github.com/users/chughe22/following{/other_user}", "gists_url": "https://api.github.com/users/chughe22/gists{/gist_id}", "starred_url": "https://api.github.com/users/chughe22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chughe22/subscriptions", "organizations_url": "https://api.github.com/users/chughe22/orgs", "repos_url": "https://api.github.com/users/chughe22/repos", "events_url": "https://api.github.com/users/chughe22/events{/privacy}", "received_events_url": "https://api.github.com/users/chughe22/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, could you please provide all the information requested in the issue template? The environment is important, so is your code.\r\n\r\nWhich update did you do? To v4.1.1? From which version?\r\n\r\nThank you.", "Working in Google Colab, so the second to most recent version and then the most recent version. \r\nUsing BertForSequenceClassification and fine-tuning the model I'm trying to output and reload.\r\n\r\n```\r\n #Save a trained model, configuration and tokenizer using `save_pretrained()`.\r\nmodel_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training\r\nmodel_to_save.save_pretrained(output_dir)\r\ntokenizer.save_pretrained(output_dir)\r\n\r\n# Copy the model files to a directory in your Google Drive.\r\n!cp -r './model_source_450_v2/' \"./drive/My Drive\"\r\n\r\n```\r\nThen this code for a GPU node on a supercomputer which works on previously created model files\r\n```\r\nfrom transformers import AutoTokenizer, AutoModel\r\nimport torch\r\nimport random\r\n\r\n# setting device to GPU if available\r\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\r\nprint('using device: ', device)\r\nprint()\r\n\r\n\"\"\"model_source_450_v2 files\r\n\r\n* config.json\r\n* pytorch_model.bin\r\n* special_tokens_map.json\r\n* tokenizer_config.json\r\n* vocab.txt\r\n\"\"\"\r\n#set modelpath\r\nmodelpath = \"./model_source_450_v2\" #location of fully trained model\r\n\r\n\r\nfrom transformers import BertTokenizer, BertModel\r\n\r\n# Retrieve fine-tuned BERT.\r\nbert_model = BertModel.from_pretrained(modelpath,\r\n output_hidden_states = True) \r\nbert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\nbert_model.eval()\r\nbert_model.to(device)\r\n```", "Could you share your PyTorch versions as well? On both setups. PyTorch changed their saved models format, so you may have the issue of saving in a newer torch version (>= 1.6.0), and reloading in an older (<1.6.0) torch version", "on supercomputer: \"import torch; print(torch.__version__)\"\r\n1.7.0\r\non colab:\r\n1.7.0+cu101", "Hmmm, I'm having a hard time understanding what might be happening from the stack-trace. You wouldn't happen to have the entire stack-trace, would you? If you do, please share it. \r\n\r\nIs it possible the file was corrupted between saving and loading?", "I tried resaving and had the same issue.\r\n\r\n```\r\nusing device: cuda\r\n\r\nTraceback (most recent call last):\r\n File \"/afs/crc.nd.edu/user/d/dheryadi/mcob/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 951, in from_pretrained\r\n state_dict = torch.load(resolved_archive_file, map_location=\"cpu\")\r\n File \"/afs/crc.nd.edu/user/d/dheryadi/mcob/lib/python3.8/site-packages/torch/serialization.py\", line 587, in load\r\n with _open_zipfile_reader(opened_file) as opened_zipfile:\r\n File \"/afs/crc.nd.edu/user/d/dheryadi/mcob/lib/python3.8/site-packages/torch/serialization.py\", line 242, in __init__\r\n super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))\r\nRuntimeError: [enforce fail at inline_container.cc:145] . 
PytorchStreamReader failed reading zip archive: failed finding central directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"gpu_v2.py\", line 77, in <module>\r\n bert_model = BertModel.from_pretrained(modelpath,\r\n File \"/afs/crc.nd.edu/user/d/dheryadi/mcob/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 953, in from_pretrained\r\n raise OSError(\r\nOSError: Unable to load weights from pytorch checkpoint file for './model_source_450_v2' at './model_source_450_v2/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. \r\n```", "Do you manage to reload the checkpoint without moving it to the new \"supercomputer environment\" ? \r\n\r\nThe error seems to be with PyTorch rather than with Transformers given the error message: \r\n```\r\nPytorchStreamReader failed reading zip archive: failed finding central directory\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", " I have the same error, is anyone able to solve this problem\r\nline 477, in load_state_dict\r\n raise OSError(\r\nOSError: Unable to load weights from pytorch checkpoint file for '{Mydict}.cache\\huggingface\\transformers\\4a74c6c9128ba518e61fbdf559d03e64b6bd0ad6db588419dfd865ace535942a.a48b7b4437be34e24274c9cf6cf57e2424d3f1eec537ec03b905e6f01d19ed77' at '{Mydict}.cache\\huggingface\\transformers\\4a74c6c9128ba518e61fbdf559d03e64b6bd0ad6db588419dfd865ace535942a.a48b7b4437be34e24274c9cf6cf57e2424d3f1eec537ec03b905e6f01d19ed77'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.\r\nHey Please can you help me to solve this problem" ]
1,609
1,658
1,619
NONE
null
Since the transformers update, I am unable to load a newly trained model. OSError: Unable to load weights from pytorch checkpoint file for './model_source_450_v2' at './model_source_450_v2/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. Have tried to set "from_tf=True" but still not loading successfully The model is being created in pytorch
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9445/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9444
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9444/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9444/comments
https://api.github.com/repos/huggingface/transformers/issues/9444/events
https://github.com/huggingface/transformers/pull/9444
780,772,906
MDExOlB1bGxSZXF1ZXN0NTUwNjExNzYy
9,444
Fix init
{ "login": "patelrajnath", "id": 9987110, "node_id": "MDQ6VXNlcjk5ODcxMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/9987110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patelrajnath", "html_url": "https://github.com/patelrajnath", "followers_url": "https://api.github.com/users/patelrajnath/followers", "following_url": "https://api.github.com/users/patelrajnath/following{/other_user}", "gists_url": "https://api.github.com/users/patelrajnath/gists{/gist_id}", "starred_url": "https://api.github.com/users/patelrajnath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patelrajnath/subscriptions", "organizations_url": "https://api.github.com/users/patelrajnath/orgs", "repos_url": "https://api.github.com/users/patelrajnath/repos", "events_url": "https://api.github.com/users/patelrajnath/events{/privacy}", "received_events_url": "https://api.github.com/users/patelrajnath/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, let's wait for #9446 to be merged please :-)", "@LysandreJik yes, I'm not sure regarding the CI errors, as I'm not that much into it. I ran the tests in Pycharm, they passed. Can we check the Logs if we could find some clue there?\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,609
1,619
1,619
NONE
null
# What does this PR do? "RobertaPreTrainedModel" is missing in models' __init__.py. It is needed, in case we need to create a subclass of the same like "RobertaForTokenClassification". <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Model Cards: @julien-c -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9444/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9444", "html_url": "https://github.com/huggingface/transformers/pull/9444", "diff_url": "https://github.com/huggingface/transformers/pull/9444.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9444.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/9443
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9443/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9443/comments
https://api.github.com/repos/huggingface/transformers/issues/9443/events
https://github.com/huggingface/transformers/pull/9443
780,723,845
MDExOlB1bGxSZXF1ZXN0NTUwNTcxMzk4
9,443
[GenerationOutputs] Fix GenerationOutputs Tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This PR actually made me correct 2 bugs additionally:\r\n\r\n1) past_key_values for BertForCausalLM\r\n2) T5 should not return T5 cross attentions if just encoder model -> make sure encoder model has never `config.is_decoder=True`" ]
1,609
1,609
1,609
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> The `GenerationOutputs` PR: https://github.com/huggingface/transformers/pull/9150 was not rebased, so that the cicrle ci on master is red now. This PR fixes it. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9443/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9443", "html_url": "https://github.com/huggingface/transformers/pull/9443", "diff_url": "https://github.com/huggingface/transformers/pull/9443.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9443.patch", "merged_at": 1609958223000 }
https://api.github.com/repos/huggingface/transformers/issues/9442
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9442/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9442/comments
https://api.github.com/repos/huggingface/transformers/issues/9442/events
https://github.com/huggingface/transformers/issues/9442
780,675,066
MDU6SXNzdWU3ODA2NzUwNjY=
9,442
[examples/text-classification] `do_predict` for the test set of local datasets
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "organizations_url": "https://api.github.com/users/forest1988/orgs", "repos_url": "https://api.github.com/users/forest1988/repos", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "received_events_url": "https://api.github.com/users/forest1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As long as it's kept simple (we want short and focused example scripts, so that they are easy to understand and tweak), I don't mind adding this feature. Feel free to open a PR with your suggestions!", "Thanks, I'll try a modification as simple as possible, and if it can fix this issue without making the example difficult to understand, I’ll open a PR!" ]
1,609
1,610
1,610
CONTRIBUTOR
null
# 🚀 Feature request It seems that `run_glue.py` has the train set and the validation set management for local CSV/JSON files, but it doesn't have args for managing the test set of the local datasets. https://github.com/huggingface/transformers/blob/7a9f1b5c99e9a5d1772649d029acdf5160419239/examples/text-classification/run_glue.py#L90-L95 I think the script is intended to be used not only for the train/validation but the test, as `glue` tasks test sets are downloaded as shown in https://huggingface.co/docs/datasets/loading_datasets.html#selecting-a-configuration. It has the `--do_predict` option for the test sets. If there is no particular reason for not having the ability to read the test set in the local dataset, would it be ok for me to add the feature? Or is there some intention behind this implementation? ## Motivation I'd like to train, validate, and test my own local dataset. ## Your contribution I think some modifications like the below may help to add the feature. ``` python test_file: Optional[str] = field( default=None, metadata={"help": "A csv or a json file containing the test data."} ) ``` ``` python datasets = load_dataset( "csv", data_files={"train": data_args.train_file, "validation": data_args.validation_file, "test": data_args.test_file} ) ``` ``` python # if data_args.task_name is not None: # test_dataset = datasets["test_matched" if data_args.task_name == "mnli" else "test"] test_dataset = datasets["test_matched" if data_args.task_name == "mnli" else "test"] ``` Thank you in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9442/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9441
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9441/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9441/comments
https://api.github.com/repos/huggingface/transformers/issues/9441/events
https://github.com/huggingface/transformers/pull/9441
780,658,926
MDExOlB1bGxSZXF1ZXN0NTUwNTE3NDU3
9,441
Fast transformers import part 1
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "PR looks very clean! I'm no real import expert, so I'll leave it up to @LysandreJik and @sgugger :-) But I'm very much welcoming this change. I think it's even cleaner that the libraries are no public attributes even more", "re-based this into the deepspeed branch, and all was good until I tried:\r\n```\r\nfrom .integrations import is_deepspeed_available\r\n```\r\ninside `training_args.py`, and got:\r\n```\r\nTraceback (most recent call last):\r\n File \"./finetune_trainer.py\", line 23, in <module>\r\n from transformers import (\r\n File \"/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/__init__.py\", line 2092, in __getattr__\r\n return super().__getattr__(name)\r\n File \"/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/file_utils.py\", line 1452, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/__init__.py\", line 2086, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/trainer_seq2seq.py\", line 24, in <module>\r\n from .trainer import Trainer\r\n File \"/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/trainer.py\", line 32, in <module>\r\n from .integrations import ( # isort: split\r\n File \"/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/integrations.py\", line 55, in <module>\r\n from .trainer_callback import TrainerCallback # noqa: E402\r\n File \"/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/trainer_callback.py\", line 28, in <module>\r\n from .training_args import TrainingArguments\r\n File \"/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/training_args.py\", line 24, in <module>\r\n from .integrations import is_deepspeed_available\r\nImportError: cannot import name 'is_deepspeed_available' from partially initialized module 'transformers.integrations' (most likely due to a circular import) (/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/integrations.py)\r\n```\r\n\r\nI fixed that by moving the import to the middle of the file where I needed it.", "When I run `from transformers import DistilBertTokenizerFast ` I see imports of tensorflow and tensorboard, so what was the purpose of this PR? I only need a tokenizer, not asking for bloatware that every model in transformers uses.\r\ntransformers-4.14.1", "Thanks for raising the issue @evrial, this patch https://github.com/huggingface/transformers/pull/14855 will be released in v4.15 sometime this week.\r\n\r\nPlease open a new issue with the issue you're facing next time so that we may get to it faster.", "> Thanks for raising the issue @evrial, this patch #14855 will be released in v4.15 sometime this week.\r\n> \r\n> Please open a new issue with the issue you're facing next time so that we may get to it faster.\r\n\r\nThanks! God bless and merry Christmas!" ]
1,609
1,640
1,609
COLLABORATOR
null
# What does this PR do? This PR is the first step for a fast `import transformers`. It changes all the test for `is_xxx_available` to avoid importing `xxx` and makes sure all integrations are only imported when needed (apart from comet ml which needs to be imported first). The second test will be a bit more complex, to avoid importing torch and tf unless necessary, and will touch all inits like in [this repo](https://github.com/sgugger/lazy_init).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9441/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9441/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9441", "html_url": "https://github.com/huggingface/transformers/pull/9441", "diff_url": "https://github.com/huggingface/transformers/pull/9441.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9441.patch", "merged_at": 1609953444000 }
https://api.github.com/repos/huggingface/transformers/issues/9440
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9440/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9440/comments
https://api.github.com/repos/huggingface/transformers/issues/9440/events
https://github.com/huggingface/transformers/pull/9440
780,657,329
MDExOlB1bGxSZXF1ZXN0NTUwNTE2MTQ4
9,440
Remove nested lxmert
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,609
1,610
1,610
COLLABORATOR
null
# What does this PR do? Remove duplicate of LXMERT
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9440/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9440", "html_url": "https://github.com/huggingface/transformers/pull/9440", "diff_url": "https://github.com/huggingface/transformers/pull/9440.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9440.patch", "merged_at": 1610010642000 }
https://api.github.com/repos/huggingface/transformers/issues/9439
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9439/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9439/comments
https://api.github.com/repos/huggingface/transformers/issues/9439/events
https://github.com/huggingface/transformers/issues/9439
780,623,104
MDU6SXNzdWU3ODA2MjMxMDQ=
9,439
Adding Stochastic Weight Averaging to transformer optimizers
{ "login": "hasansalimkanmaz", "id": 49716619, "node_id": "MDQ6VXNlcjQ5NzE2NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/49716619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hasansalimkanmaz", "html_url": "https://github.com/hasansalimkanmaz", "followers_url": "https://api.github.com/users/hasansalimkanmaz/followers", "following_url": "https://api.github.com/users/hasansalimkanmaz/following{/other_user}", "gists_url": "https://api.github.com/users/hasansalimkanmaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/hasansalimkanmaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasansalimkanmaz/subscriptions", "organizations_url": "https://api.github.com/users/hasansalimkanmaz/orgs", "repos_url": "https://api.github.com/users/hasansalimkanmaz/repos", "events_url": "https://api.github.com/users/hasansalimkanmaz/events{/privacy}", "received_events_url": "https://api.github.com/users/hasansalimkanmaz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "In the meantime, I have started to work on adding SWA to huggingface. After doing some experiments, If I get better results, I will create a PR. You can check my work from [here](https://github.com/hasansalimkanmaz/transformers/tree/add-SWA-optimizer)\r\n\r\nAny feedback will be appreciated.", "Based on my custom experiments, I couldn't produce better results with SWA. So I am closing this issue. My implementation for SWA is so custom and I didn't go through all tests. So, I will not create a PR due to the lack of benefits. " ]
1,609
1,610
1,610
CONTRIBUTOR
null
# 🚀 Feature request I would like to train my models with SWA optimizer. According to this [paper](https://arxiv.org/pdf/1803.05407.pdf), SWA leads to better models and wider optima. ## Motivation As humans, we are all willing to get better results :) . I think adding this feature will lead to better models without costing more and it may be easy to implement.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9439/reactions", "total_count": 6, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/9439/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9438
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9438/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9438/comments
https://api.github.com/repos/huggingface/transformers/issues/9438/events
https://github.com/huggingface/transformers/issues/9438
780,550,030
MDU6SXNzdWU3ODA1NTAwMzA=
9,438
Doc styling utils adds parasites new lines
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "@sgugger do you maybe have an idea here? Now that I see that it's Window I think this could be the reason", "Mmm, @jplu didn't have any issue with this I believe.\r\nSorry, reading again, you're not running make style but launching the script directly. There is nothing there to properly support Windows special line endings so it would probably require rewriting it from scratch to fix this. Can you use WSL for the styling?", "I can run properly `python utils/style_doc.py src/transformers docs/source --max_len 119` on Windows (not on WSL) without any error. In order to properly run all the make targets on Windows I use the steps given in the [CONTRIBUTING readme](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#develop-on-windows).", "Ah, there was one special magic option from GitHub for the line endings that could help, @LysandreJik do you remember which one? We had this problem with Niels on TAPAS.", "It is the `core.autocrlf` config, you can see how to set it in the doc https://git-scm.com/book/fr/v2/Personnalisation-de-Git-Configuration-de-Git", "Asked Lysandre and running `git config core.autocrlf false` solved the issue last time a contributor ran into it. @SBrandeis could you test if it does solve the issue for you? If that's the case, we'll add it to the `CONTRIBUTING` guide.", "Setting `git config core.autocrlf` to `false` did not solve my issue, neither did running the python util from WSL.\r\n ", "Just tried on my Windows laptop and I'm unable to reproduce, the line runs just fine on my side :-/, so it must be something else. The `newline=\"\\n\"` that jplu added everywhere we open a file should make it so that there is no different line endings problem in the first place, but there is still something here...\r\n\r\nFrom the diff, it comes from [this regex](https://github.com/huggingface/transformers/blob/1c19b423bf274a465f95725a79819bf82f71329e/utils/style_doc.py#L417) but there should be no weird `\\r` from Windows at this stage.\r\n\r\nAnyhow, will rework the regex as a for loop and it should work everywhere hopefully.", "(Not sure the PR above will actually fix the issue since I can't reproduce, please confirm if it does or no @SBrandeis )", "Hi @sgugger, thanks a lot for the PR.\r\nUnfortunately, it does not solve the issue on my side (the style_doc util still updates 196 files).\r\nSince none of @jplu, @patrickvonplaten and you can reproduce the issue, it must be related to my particular setup.\r\nNot sure what is the cause though, but I'll let you know if I figure this out !", "@SBrandeis as we are both on Windows, do you want we check that together offline?", "I actually forgot to push the changes in #9488 because it was Friday evening and my brain was dead :-/\r\nWill open a new PR.", "@SBrandeis #9516 actually contains the code I wanted you to test, so if you could try again on this branch?", "@jplu helped me troubleshoot this (thanks @jplu !)\r\nTurns out my `git` was misconfigured, running `git config --global core.autocrlf input` solved my issue 😓 \r\nI'll add a note in the `CONTRIBUTING.md` guide." ]
1,609
1,610
1,610
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.2.0dev0 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: Nope - Using distributed or parallel set-up in script?: Nope ### Who can help @sgugger ## Information Running the python util to style docs adds parasite new lines in every single docstring. See: ```bash $ python utils/style_doc.py src/transformers docs/source --max_len 119 --check_only Traceback (most recent call last): File "utils/style_doc.py", line 491, in <module> main(*args.files, max_len=args.max_len, check_only=args.check_only) File "utils/style_doc.py", line 479, in main raise ValueError(f"{len(changed)} files should be restyled!") ValueError: 345 files should be restyled! ``` See this commit for an example of what it does: https://github.com/huggingface/transformers/pull/9150/commits/b4dedd5ca25f043c66d12c774fa00a34c74dffb2 ## To reproduce Steps to reproduce the behavior: 1. Checkout and update master branch 2. run `python utils/style_doc.py src/transformers docs/source --max_len 119 --check-only` from transformers root Output: ```python Traceback (most recent call last): File "utils/style_doc.py", line 491, in <module> main(*args.files, max_len=args.max_len, check_only=args.check_only) File "utils/style_doc.py", line 479, in main raise ValueError(f"{len(changed)} files should be restyled!") ValueError: 345 files should be restyled! ``` It might have something to do with Windows or a particular setup of my machine because behavior cannot be reproduced by @patrickvonplaten. ## Expected behavior On master branch, documentation should not need to be restyled
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9438/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9437
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9437/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9437/comments
https://api.github.com/repos/huggingface/transformers/issues/9437/events
https://github.com/huggingface/transformers/issues/9437
780,537,958
MDU6SXNzdWU3ODA1Mzc5NTg=
9,437
Can't find pretrained model for TFPegasusForConditionalGeneration
{ "login": "demongolem", "id": 1395338, "node_id": "MDQ6VXNlcjEzOTUzMzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1395338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/demongolem", "html_url": "https://github.com/demongolem", "followers_url": "https://api.github.com/users/demongolem/followers", "following_url": "https://api.github.com/users/demongolem/following{/other_user}", "gists_url": "https://api.github.com/users/demongolem/gists{/gist_id}", "starred_url": "https://api.github.com/users/demongolem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/demongolem/subscriptions", "organizations_url": "https://api.github.com/users/demongolem/orgs", "repos_url": "https://api.github.com/users/demongolem/repos", "events_url": "https://api.github.com/users/demongolem/events{/privacy}", "received_events_url": "https://api.github.com/users/demongolem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @demongolem, yes sadly those models were not yet uploaded in TF. Could you instead just run:\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import TFPegasusForConditionalGeneration, PegasusTokenizer\r\nsrc_text = [\r\n \"\"\" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.\"\"\"\r\n]\r\n\r\nmodel_name = 'google/pegasus-xsum'\r\ntokenizer = PegasusTokenizer.from_pretrained(model_name)\r\nmodel = TFPegasusForConditionalGeneration.from_pretrained(model_name, from_pt=True)\r\n\r\nbatch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors=\"tf\")\r\ntranslated = model.generate(**batch)\r\ntgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n```\r\n\r\nAlso note that you have to pass `return_tensors=\"tf\"` in the tokenizer.", "Thanks @patrickvonplaten . The above code does do as necessary for me. Thanks for pointing out the `return_tensors` part as well, I missed that one." ]
1,609
1,609
1,609
NONE
null
## Environment info - `transformers` version: 4.1.1 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.8.0 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Pegasus: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): pegasus-xsum The problem arises when using: * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] my own task or dataset: (give details below) ## To reproduce Try to download model for TFPegasusForConditionalGeneration Steps to reproduce the behavior: 1. Choose pegasus-xsum model 2. Fetch pretrained model 3. ``` import tensorflow as tf from transformers import TFPegasusForConditionalGeneration, PegasusTokenizer src_text = [ """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.""" ] model_name = 'google/pegasus-xsum' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = TFPegasusForConditionalGeneration.from_pretrained(model_name) batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest') translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) ``` ## Expected behavior The model should download. Instead the model cannot be found `404 Client Error: Not Found for url: https://huggingface.co/google/pegasus-xsum/resolve/main/tf_model.h5 `
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9437/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9436
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9436/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9436/comments
https://api.github.com/repos/huggingface/transformers/issues/9436/events
https://github.com/huggingface/transformers/issues/9436
780,497,600
MDU6SXNzdWU3ODA0OTc2MDA=
9,436
RuntimeError: The size of tensor a (128) must match the size of tensor b (32) at non-singleton dimension 1
{ "login": "aliebrahiiimi", "id": 50341433, "node_id": "MDQ6VXNlcjUwMzQxNDMz", "avatar_url": "https://avatars.githubusercontent.com/u/50341433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aliebrahiiimi", "html_url": "https://github.com/aliebrahiiimi", "followers_url": "https://api.github.com/users/aliebrahiiimi/followers", "following_url": "https://api.github.com/users/aliebrahiiimi/following{/other_user}", "gists_url": "https://api.github.com/users/aliebrahiiimi/gists{/gist_id}", "starred_url": "https://api.github.com/users/aliebrahiiimi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aliebrahiiimi/subscriptions", "organizations_url": "https://api.github.com/users/aliebrahiiimi/orgs", "repos_url": "https://api.github.com/users/aliebrahiiimi/repos", "events_url": "https://api.github.com/users/aliebrahiiimi/events{/privacy}", "received_events_url": "https://api.github.com/users/aliebrahiiimi/received_events", "type": "User", "site_admin": false }
[ { "id": 1897896961, "node_id": "MDU6TGFiZWwxODk3ODk2OTYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Migration", "name": "Migration", "color": "e99695", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi, could you please provide either:\r\n- a reproducible code example\r\n- some more information related to your script. The error doesn't happen at the model load, but here: `File \"paragraph_selection/train.py\", line 293, in <module>`.\r\n- What is your `num_labels`\r\n\r\nIt's complicated to identify the issue here, but could you try replacing the following line:\r\n```py\r\nloss = model(input_ids, segment_ids, input_mask, label_ids)\r\n```\r\nwith:\r\n```py\r\nloss = model(input_ids, attention_mask=input_mask, token_type_ids=segment_ids, labels=label_ids)\r\n```", "yes, your code is correct\r\nthank you", "**RuntimeError: The size of tensor a (128) must match the size of tensor b (32) at non-singleton dimension 3**\r\nMain.py \r\n\r\n\r\nimport torch.nn as nn\r\nimport torch\r\nfrom torchvision import models\r\nfrom utils import save_net,load_net\r\n\r\nclass CSRNet(nn.Module):\r\n def __init__(self, load_weights=False):\r\n super(CSRNet, self).__init__()\r\n self.seen = 0\r\n self.frontend_feat = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512,512,512,'M']\r\n self.backend_feat = [512, 512, 512,256,128,64]\r\n self.frontend = make_layers(self.frontend_feat)\r\n self.backend = make_layers(self.backend_feat,in_channels = 512,dilation = True)\r\n self.output_layer = nn.Conv2d(64, 1, kernel_size=1)\r\n if not load_weights:\r\n mod = models.vgg16(pretrained = True)\r\n self._initialize_weights()\r\n for i in range(len(self.frontend.state_dict().items())):\r\n list(self.frontend.state_dict().items())[i][1].data[:] = list(mod.state_dict().items())[i][1].data[:]\r\n def forward(self,x):\r\n x = self.frontend(x)\r\n x = self.backend(x)\r\n x = self.output_layer(x)\r\n return x\r\n def _initialize_weights(self):\r\n for m in self.modules():\r\n if isinstance(m, nn.Conv2d):\r\n nn.init.normal_(m.weight, std=0.01)\r\n if m.bias is not None:\r\n nn.init.constant_(m.bias, 0)\r\n elif isinstance(m, nn.BatchNorm2d):\r\n nn.init.constant_(m.weight, 1)\r\n nn.init.constant_(m.bias, 0)\r\n \r\n \r\ndef make_layers(cfg, in_channels = 3,batch_norm=False,dilation = False):\r\n if dilation:\r\n d_rate = 2\r\n else:\r\n d_rate = 1\r\n layers = []\r\n for v in cfg:\r\n if v == 'M':\r\n layers += [nn.MaxPool2d(kernel_size=2, stride=2)]\r\n else:\r\n conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=d_rate,dilation = d_rate)\r\n if batch_norm:\r\n layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]\r\n else:\r\n layers += [conv2d, nn.ReLU(inplace=True)]\r\n in_channels = v\r\n return nn.Sequential(*layers) ", "Train.py\r\n\r\n\r\n\r\nimport sys\r\nimport os\r\n\r\nimport warnings\r\n\r\nfrom model import CSRNet\r\n\r\nfrom utils import save_checkpoint\r\n\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch.autograd import Variable\r\nfrom torchvision import datasets, transforms\r\n\r\nimport numpy as np\r\nimport argparse\r\nimport json\r\nimport cv2\r\nimport dataset\r\nimport time\r\n\r\nparser = argparse.ArgumentParser(description='PyTorch CSRNet')\r\n\r\nparser.add_argument('train_json', metavar='TRAIN',\r\n help='path to train json')\r\nparser.add_argument('test_json', metavar='TEST',\r\n help='path to test json')\r\n\r\nparser.add_argument('--pre', '-p', metavar='PRETRAINED', default=None,type=str,\r\n help='path to the pretrained model')\r\n\r\nparser.add_argument('gpu',metavar='GPU', type=str,\r\n help='GPU id to use.')\r\n\r\nparser.add_argument('task',metavar='TASK', type=str,\r\n help='task id to use.')\r\n\r\ndef main():\r\n \r\n global 
args,best_prec1\r\n \r\n best_prec1 = 1e6\r\n \r\n args = parser.parse_args()\r\n args.original_lr = 1e-7\r\n args.lr = 1e-7\r\n args.batch_size = 1\r\n args.momentum = 0.95\r\n args.decay = 5*1e-4\r\n args.start_epoch = 0\r\n args.epochs = 400\r\n args.steps = [-1,1,100,150]\r\n args.scales = [1,1,1,1]\r\n args.workers = 4\r\n args.seed = time.time()\r\n args.print_freq = 30\r\n with open(args.train_json, 'r') as outfile: \r\n train_list = json.load(outfile)\r\n with open(args.test_json, 'r') as outfile: \r\n val_list = json.load(outfile)\r\n \r\n os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu\r\n torch.cuda.manual_seed(args.seed)\r\n \r\n model = CSRNet()\r\n \r\n model = model.cuda()\r\n \r\n criterion = nn.MSELoss(size_average=False).cuda()\r\n \r\n optimizer = torch.optim.SGD(model.parameters(), args.lr,\r\n momentum=args.momentum,\r\n weight_decay=args.decay)\r\n\r\n if args.pre:\r\n if os.path.isfile(args.pre):\r\n print(\"=> loading checkpoint '{}'\".format(args.pre))\r\n checkpoint = torch.load(args.pre)\r\n args.start_epoch = checkpoint['epoch']\r\n best_prec1 = checkpoint['best_prec1']\r\n model.load_state_dict(checkpoint['state_dict'])\r\n optimizer.load_state_dict(checkpoint['optimizer'])\r\n print(\"=> loaded checkpoint '{}' (epoch {})\"\r\n .format(args.pre, checkpoint['epoch']))\r\n else:\r\n print(\"=> no checkpoint found at '{}'\".format(args.pre))\r\n \r\n for epoch in range(args.start_epoch, args.epochs):\r\n \r\n adjust_learning_rate(optimizer, epoch)\r\n \r\n train(train_list, model, criterion, optimizer, epoch)\r\n prec1 = validate(val_list, model, criterion)\r\n \r\n is_best = prec1 < best_prec1\r\n best_prec1 = min(prec1, best_prec1)\r\n print(' * best MAE {mae:.3f} '\r\n .format(mae=best_prec1))\r\n save_checkpoint({\r\n 'epoch': epoch + 1,\r\n 'arch': args.pre,\r\n 'state_dict': model.state_dict(),\r\n 'best_prec1': best_prec1,\r\n 'optimizer' : optimizer.state_dict(),\r\n }, is_best,args.task)\r\n\r\ndef train(train_list, model, criterion, optimizer, epoch):\r\n \r\n losses = AverageMeter()\r\n batch_time = AverageMeter()\r\n data_time = AverageMeter()\r\n \r\n \r\n train_loader = torch.utils.data.DataLoader(\r\n dataset.listDataset(train_list,\r\n shuffle=True,\r\n transform=transforms.Compose([\r\n transforms.ToTensor(),transforms.Normalize(mean=[0.485, 0.456, 0.406],\r\n std=[0.229, 0.224, 0.225]),\r\n ]), \r\n train=True, \r\n seen=model.seen,\r\n batch_size=args.batch_size,\r\n num_workers=args.workers),\r\n batch_size=args.batch_size)\r\n print('epoch %d, processed %d samples, lr %.10f' % (epoch, epoch * len(train_loader.dataset), args.lr))\r\n \r\n model.train()\r\n end = time.time()\r\n \r\n for i,(img, target)in enumerate(train_loader):\r\n data_time.update(time.time() - end)\r\n \r\n img = img.cuda()\r\n img = Variable(img)\r\n output = model(img)\r\n \r\n \r\n \r\n \r\n target = target.type(torch.FloatTensor).unsqueeze(0).cuda()\r\n target = Variable(target)\r\n \r\n \r\n loss = criterion(output, target)\r\n \r\n losses.update(loss.item(), img.size(0))\r\n optimizer.zero_grad()\r\n loss.backward()\r\n optimizer.step() \r\n \r\n batch_time.update(time.time() - end)\r\n end = time.time()\r\n \r\n if i % args.print_freq == 0:\r\n print('Epoch: [{0}][{1}/{2}]\\t'\r\n 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\r\n 'Data {data_time.val:.3f} ({data_time.avg:.3f})\\t'\r\n 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\r\n .format(\r\n epoch, i, len(train_loader), batch_time=batch_time,\r\n data_time=data_time, loss=losses))\r\n \r\ndef validate(val_list, 
model, criterion):\r\n print ('begin test')\r\n test_loader = torch.utils.data.DataLoader(\r\n dataset.listDataset(val_list,\r\n shuffle=False,\r\n transform=transforms.Compose([\r\n transforms.ToTensor(),transforms.Normalize(mean=[0.485, 0.456, 0.406],\r\n std=[0.229, 0.224, 0.225]),\r\n ]), train=False),\r\n batch_size=args.batch_size) \r\n \r\n model.eval()\r\n \r\n mae = 0\r\n \r\n for i,(img, target) in enumerate(test_loader):\r\n img = img.cuda()\r\n img = Variable(img)\r\n output = model(img)\r\n \r\n mae += abs(output.data.sum()-target.sum().type(torch.FloatTensor).cuda())\r\n \r\n mae = mae/len(test_loader) \r\n print(' * MAE {mae:.3f} '\r\n .format(mae=mae))\r\n\r\n return mae \r\n \r\ndef adjust_learning_rate(optimizer, epoch):\r\n \"\"\"Sets the learning rate to the initial LR decayed by 10 every 30 epochs\"\"\"\r\n \r\n \r\n args.lr = args.original_lr\r\n \r\n for i in range(len(args.steps)):\r\n \r\n scale = args.scales[i] if i < len(args.scales) else 1\r\n \r\n \r\n if epoch >= args.steps[i]:\r\n args.lr = args.lr * scale\r\n if epoch == args.steps[i]:\r\n break\r\n else:\r\n break\r\n for param_group in optimizer.param_groups:\r\n param_group['lr'] = args.lr\r\n \r\nclass AverageMeter(object):\r\n \"\"\"Computes and stores the average and current value\"\"\"\r\n def __init__(self):\r\n self.reset()\r\n\r\n def reset(self):\r\n self.val = 0\r\n self.avg = 0\r\n self.sum = 0\r\n self.count = 0\r\n\r\n def update(self, val, n=1):\r\n self.val = val\r\n self.sum += val * n\r\n self.count += n\r\n self.avg = self.sum / self.count \r\n \r\nif __name__ == '__main__':\r\n main() ", "> Hi, could you please provide either:\r\n> \r\n> * a reproducible code example\r\n> * some more information related to your script. The error doesn't happen at the model load, but here: `File \"paragraph_selection/train.py\", line 293, in <module>`.\r\n> * What is your `num_labels`\r\n> \r\n> It's complicated to identify the issue here, but could you try replacing the following line:\r\n> \r\n> ```python\r\n> loss = model(input_ids, segment_ids, input_mask, label_ids)\r\n> ```\r\n> \r\n> with:\r\n> \r\n> ```python\r\n> loss = model(input_ids, attention_mask=input_mask, token_type_ids=segment_ids, labels=label_ids)\r\n> ```\r\n\r\nThank you very much for your help" ]
1,609
1,652
1,610
NONE
null
# 📚 Migration from pytorch-pretrained-bert to transfomers ## Information <!-- Important information --> Model I am using (Bert, XLNet ...): bert Language I am using the model on (English, Chinese ...):english The problem arises when using: * [ ] the official example scripts: (give details below) when I use this model my code work correctly ```py from pytorch_pretrained_bert.modeling import BertForSequenceClassification model = BertForSequenceClassification.from_pretrained(args.bert_model, num_labels=num_labels) ``` but when I change to below I get the error, what Is the problem ```py from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained('bert-base-cased', num_labels=num_labels) ``` ``` error : Traceback (most recent call last): File "paragraph_selection/train.py", line 293, in <module> loss = model(input_ids, segment_ids, input_mask, label_ids) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 1375, in forward return_dict=return_dict, File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 862, in forward input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 204, in forward embeddings += position_embeddings RuntimeError: The size of tensor a (128) must match the size of tensor b (32) at non-singleton dimension 1 ``` * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## Details <!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code. --> ## Environment info colab - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): ## Checklist - [ ] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [ ] I checked if a related official extension example runs on my machine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9436/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9435
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9435/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9435/comments
https://api.github.com/repos/huggingface/transformers/issues/9435/events
https://github.com/huggingface/transformers/pull/9435
780,405,418
MDExOlB1bGxSZXF1ZXN0NTUwMzAzMjMz
9,435
Fix URLs to TAPAS notebooks
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,609
1,609
1,609
CONTRIBUTOR
null
# What does this PR do? As I updated the repository structure of my [Transformers tutorials](https://github.com/NielsRogge/Transformers-Tutorials) repository, some URLs related to TAPAS need to be updated. Thanks @mrm8488 for already updating one URL in #9413.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9435/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9435", "html_url": "https://github.com/huggingface/transformers/pull/9435", "diff_url": "https://github.com/huggingface/transformers/pull/9435.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9435.patch", "merged_at": 1609935642000 }
https://api.github.com/repos/huggingface/transformers/issues/9434
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9434/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9434/comments
https://api.github.com/repos/huggingface/transformers/issues/9434/events
https://github.com/huggingface/transformers/pull/9434
780,362,171
MDExOlB1bGxSZXF1ZXN0NTUwMjY0NTIw
9,434
Making it possible to create a full Conversation directly
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That's really cool! Also pinging @guillaume-be here as he might is the original author of the pipeline :-)", "Also, @Narsil do you if it's possible to have a chat not widget in the inference API for this pipeline? I think it would be really nice to place around Blenderbot and DialoGPT", "> Also, @Narsil do you if it's possible to have a chat not widget in the inference API for this pipeline? I think it would be really nice to place around Blenderbot and DialoGPT\r\n\r\n@patrickvonplaten it's in the pipes, but I've not yet created the widget for huggingface.co, the `api-inference` is ready though.\r\n\r\n@patrickvonplaten, @sgugger can you please re-review. There sort of major bug, where we used\r\n\r\n`tokenizer.encode(inputs, add_special_tokens=False)` so that BOS end EOS were **not** added on models that required them (instead EOS was added \"manually\" by the pipeline, leading to poor results on Blenderbot for instance).\r\n\r\nPing @mfuntowicz to make sure we can safely remove that or if there was a strong reason for bypassing tokenizer logic there.", "Also changed the tokenizer behavior to use a real one.", "Thanks for looping me in! It looks like there are a lot of changes, a few comments on my side:\r\n- regarding the change from \r\n```\r\ninputs = self.tokenizer(inputs, add_special_tokens=False, padding=False).get(\"input_ids\", [])\r\nfor input in inputs:\r\n input.append(self.tokenizer.eos_token_id)\r\n```\r\nto:\r\n```\r\ninputs = self.tokenizer(inputs, **kwargs).get(\"input_ids\", [])\r\n```\r\nare you sure that the behaviour remains correct for DialoGPT? As far as I know DialoGPT uses the GPT2 tokenizer that does not add a `eos` automatically at the end of the encoded input. Test for BlenderBot were added in https://github.com/huggingface/transformers/blob/74f6f91a9dc944b1f8872a0d22abd60050aa41bc/tests/test_pipelines_conversational.py#L102 and I did not observe a poor performance back then - did something change? Also note that BlenderBot does not seem to require a BOS token (https://github.com/huggingface/transformers/blob/f33a6f34461fea61b579a7ec732fcd174b2b41cd/src/transformers/models/blenderbot/tokenization_blenderbot.py#L57)\r\n- The `if len(new_input) > max_length - self.min_length_for_response` was set-up to allow the history to leave some space for future responses. Is this now done as part of the history further capabilities?\r\n- Could you please clarify the need for `_get_history` instead of accessing the history directly?\r\n- Regarding the title of the PR, if you are interested I added this feature to the Rust version of this pipeline a few months ago. The approach seems simpler than the changes proposed here, am I missing something? See https://github.com/guillaume-be/rust-bert/blob/7890d2daffea8e2c792a2e8930294e403b2321dd/src/pipelines/conversation.rs#L416 for reference (I see from your activity that you are familiar with Rust!)\r\n\r\nThanks!", "Hi @guillaume-be ,\r\n\r\nThose changes do not belong in this PR anyway, I'll make a separate PR following this one, we should continue the discussion over there.", "It seems the tests are failing in `master` since this merge: https://app.circleci.com/pipelines/github/huggingface/transformers/18333/workflows/72042bfe-4d42-42de-8389-bc0d1cc5494c/jobs/148896", "Yes, was missing a rebase before test, another commit introduced a new warning, which broke the test.\r\n\r\nI am not sure what's the strategy concerning warnings and tests. 
I've tried to be conservative (meaning explicitly testing them), but I know it might become cumbersome at some point, I can remove those checks if needed." ]
1,609
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? - Currently conversations contain some state (`conversation.history` namely). - There is no obvious way to create a conversation from pure logs aside from mutating state. - The actual result is still buggy because `history` is not correctly updated by the Conversation object. Objectives of this PR: - Enable creation of a Conversation from existing exchanges. ```Conversation("Why do you recommend it ?", past_user_inputs=["Can you recommend a book ?"], generated_responses=["I recommend reading the Lord of the Rings."])``` - Keep relatively close to previous code. - Fix the bug, that simply discarded history if you created a Conversation through mutation of state. (**Could be backward incompat**) - `history` renamed `_history` + `_index` as it's now treated as a cache variable (namely to prevent recreating tokens of the conversation all the time. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @mfuntowicz @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
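For illustration, a short hedged sketch of how the new constructor composes with the conversational pipeline; the DialoGPT checkpoint is only an example and is not part of this PR:

```py
from transformers import Conversation, pipeline

# Rebuild a conversation from existing logs instead of mutating state turn by turn.
conversation = Conversation(
    "Why do you recommend it ?",
    past_user_inputs=["Can you recommend a book ?"],
    generated_responses=["I recommend reading the Lord of the Rings."],
)

# Any conversational checkpoint works here; DialoGPT is just an assumption for the demo.
chatbot = pipeline("conversational", model="microsoft/DialoGPT-small")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```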
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9434/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9434/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9434", "html_url": "https://github.com/huggingface/transformers/pull/9434", "diff_url": "https://github.com/huggingface/transformers/pull/9434.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9434.patch", "merged_at": 1610112806000 }
https://api.github.com/repos/huggingface/transformers/issues/9433
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9433/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9433/comments
https://api.github.com/repos/huggingface/transformers/issues/9433/events
https://github.com/huggingface/transformers/pull/9433
780,341,509
MDExOlB1bGxSZXF1ZXN0NTUwMjQ1NTA0
9,433
Removing duplicated code for Translation, Summarization and Text2TextGeneration pipelines
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Feel free to merge whenever @Narsil " ]
1,609
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? `TranslationPipeline`, `SummarizationPipeline` and `Text2TextGenerationPipeline` share quite a bit of code for the generation part. This PR aims to remove that code duplication to prevent future errors in argument handling while preserving, documentation for all methods and functions and the full behavior. Translation and Summarization now inherit from Text2TextGenerationPipeline. They retain their own docstrings to be more readable in the docs. New function `check_inputs` has appeared which does all the current variation between the 3 classes, basically by raising different warnings based on inputs and underlying model config. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @sgugger Edit: Sorry about the diff, just gave a look, it's totally unreadable mostly because I reordered the classes so that the Base classe (Text2TextGenerationPipeline) is before the subclasses. I'll happily switch that back to make review on the actual code easier (and maybe change back later the order for cleaner code in the end) Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
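A hedged usage sketch of the shared behaviour after the refactor: both task pipelines accept the same generation keyword arguments because they now route through `Text2TextGenerationPipeline` (the checkpoints below are examples, not requirements of this PR):

```py
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
translator = pipeline("translation_en_to_fr", model="t5-small")

text = "Hugging Face Transformers provides thousands of pretrained models for text generation tasks."

# The same generation arguments (max_length, min_length, num_beams, ...) apply to both,
# since argument handling now lives in the shared Text2TextGenerationPipeline.
print(summarizer(text, max_length=40, min_length=5, num_beams=4))
print(translator(text, max_length=60, num_beams=4))
```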
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9433/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9433/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9433", "html_url": "https://github.com/huggingface/transformers/pull/9433", "diff_url": "https://github.com/huggingface/transformers/pull/9433.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9433.patch", "merged_at": 1610057417000 }
https://api.github.com/repos/huggingface/transformers/issues/9432
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9432/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9432/comments
https://api.github.com/repos/huggingface/transformers/issues/9432/events
https://github.com/huggingface/transformers/pull/9432
780,312,636
MDExOlB1bGxSZXF1ZXN0NTUwMjE4NjEz
9,432
Enable TruncationStrategy override for pipelines
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,609
1,610
1,610
CONTRIBUTOR
null
# What does this PR do? Right now the truncation argument for tokenizers is not overridable, which leads to a poor UX on some pipelines, most notably Summarization. Summarization triggers an error on texts that end up with too many tokens for the underlying model. The current strategy is simply to allow the argument to be overridden, as truncating by default is not necessarily good either. More complex strategies are required to "solve" the problem (chunk the original text into chunks of ~max_length, drop a chunk if it is small enough, <0.1 max_length?, then concatenate the resulting summaries?). The current PR is a small step in that direction. There should not be any backward incompatibilities with the current changes. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> @LysandreJik @patrickvonplaten ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
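A minimal sketch of the intended call, assuming the override is exposed as a `truncation` keyword on the pipeline call as described above (the checkpoint and text are placeholders):

```py
from transformers import pipeline
from transformers.tokenization_utils_base import TruncationStrategy

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Deliberately longer than the model's maximum input length.
long_article = "The quick brown fox jumps over the lazy dog. " * 500

# Without the override this input can error out on overly long sequences; with it,
# the tokenizer simply truncates to the model maximum before generation.
summary = summarizer(long_article, truncation=TruncationStrategy.ONLY_FIRST)
print(summary[0]["summary_text"])
```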
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9432/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9432", "html_url": "https://github.com/huggingface/transformers/pull/9432", "diff_url": "https://github.com/huggingface/transformers/pull/9432.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9432.patch", "merged_at": 1610375009000 }
https://api.github.com/repos/huggingface/transformers/issues/9431
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9431/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9431/comments
https://api.github.com/repos/huggingface/transformers/issues/9431/events
https://github.com/huggingface/transformers/pull/9431
780,279,176
MDExOlB1bGxSZXF1ZXN0NTUwMTg3MDI1
9,431
[Docs] Add useful links to model sharing
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,609
1,609
1,609
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR extends the **model_sharing** doc by two additional links that point to helper scripts to more efficiently change multiple configs and upload organization-specific repos. Since some people have been asking for these kinds of scripts, I think it makes sense to link them here. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9431/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/9431", "html_url": "https://github.com/huggingface/transformers/pull/9431", "diff_url": "https://github.com/huggingface/transformers/pull/9431.diff", "patch_url": "https://github.com/huggingface/transformers/pull/9431.patch", "merged_at": 1609940216000 }
https://api.github.com/repos/huggingface/transformers/issues/9430
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9430/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9430/comments
https://api.github.com/repos/huggingface/transformers/issues/9430/events
https://github.com/huggingface/transformers/issues/9430
780,219,489
MDU6SXNzdWU3ODAyMTk0ODk=
9,430
T5 base uses a lot of memory to train on
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @juliahane \r\nCould you post the training command ? \r\nbatch size 32 is total batch size or `per_device_batch_size` ?\r\n\r\nIn my experiments I was able to use max 4 `per_device_batch_size` with `max_input_length` 512 and `max_target_length` 64.\r\n\r\nAlso, this kind of question should be asked on the[ forum ](https://discuss.huggingface.co/t/t5-finetuning-tips/684) as it's not a bug or issue.\r\n\r\n[This](https://discuss.huggingface.co/t/t5-finetuning-tips/684) discussion might help.", "this is per device batch size. max_length = 128\nto me this is a bug that the model requires this much memory\nfor small I can run the same thing with batch size =64\n\nOn Wed, Jan 6, 2021 at 8:40 AM Suraj Patil <[email protected]> wrote:\n\n> Hi @juliahane <https://github.com/juliahane>\n> Could you post the training command ?\n> batch size 32 is total batch size or per_device_batch_size ?\n>\n> In my experiments I was able to use max 4 per_device_batch_size with\n> max_input_length 512 and max_target_length 64.\n>\n> Also, this kind of question should be asked on the forum\n> <https://discuss.huggingface.co/t/t5-finetuning-tips/684> as it's not a\n> bug or issue.\n>\n> This <https://discuss.huggingface.co/t/t5-finetuning-tips/684> discussion\n> might help.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/9430#issuecomment-755164832>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AM3GZM72EZRVYSEBPLPVBF3SYQOXJANCNFSM4VXEGT3A>\n> .\n>\n", "coud you tell me if Adafactor save on memory?\n\nOn Wed, Jan 6, 2021 at 8:40 AM Suraj Patil <[email protected]> wrote:\n\n> Hi @juliahane <https://github.com/juliahane>\n> Could you post the training command ?\n> batch size 32 is total batch size or per_device_batch_size ?\n>\n> In my experiments I was able to use max 4 per_device_batch_size with\n> max_input_length 512 and max_target_length 64.\n>\n> Also, this kind of question should be asked on the forum\n> <https://discuss.huggingface.co/t/t5-finetuning-tips/684> as it's not a\n> bug or issue.\n>\n> This <https://discuss.huggingface.co/t/t5-finetuning-tips/684> discussion\n> might help.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/9430#issuecomment-755164832>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AM3GZM72EZRVYSEBPLPVBF3SYQOXJANCNFSM4VXEGT3A>\n> .\n>\n", "Hi @juliahane it is indeed the case that adafactor improves memory usage, which is why the original author uses it. You can check out the [paper](https://arxiv.org/abs/1804.04235) on adafactor for more info, but the abstract says the most. My intuition here is that adafactor (or similar memory-efficient optimizer) is required to train the large t5 models.", "thank you, very helpful, I will try it.\n\nOn Wed, Jan 6, 2021 at 11:39 AM Kenneth Enevoldsen <[email protected]>\nwrote:\n\n> Hi @juliahane <https://github.com/juliahane> it is indeed the case that\n> adafactor improves memory usage, which is why the original author uses it.\n> You can check out the paper <https://arxiv.org/abs/1804.04235> on\n> adafactor for more info, but the abstract says the most. 
My intuition here\n> is that adafactor (or similar memory-efficient optimizer) is required to\n> train the large t5 models.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/9430#issuecomment-755223491>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AM3GZM7WKUR23L23FVKAMZTSYQ4UZANCNFSM4VXEGT3A>\n> .\n>\n", "In the thread they say to set autoscaling to off, do you know @kenneth how\nI can do it?\nApart from this, I could not find more suggestion for saving memory on GPU\nin that thread\nthanks\n\nOn Wed, Jan 6, 2021 at 2:13 PM julia hane <[email protected]> wrote:\n\n> thank you, very helpful, I will try it.\n>\n> On Wed, Jan 6, 2021 at 11:39 AM Kenneth Enevoldsen <\n> [email protected]> wrote:\n>\n>> Hi @juliahane <https://github.com/juliahane> it is indeed the case that\n>> adafactor improves memory usage, which is why the original author uses it.\n>> You can check out the paper <https://arxiv.org/abs/1804.04235> on\n>> adafactor for more info, but the abstract says the most. My intuition here\n>> is that adafactor (or similar memory-efficient optimizer) is required to\n>> train the large t5 models.\n>>\n>> —\n>> You are receiving this because you were mentioned.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/transformers/issues/9430#issuecomment-755223491>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/AM3GZM7WKUR23L23FVKAMZTSYQ4UZANCNFSM4VXEGT3A>\n>> .\n>>\n>\n", "I assume it is the:\r\n```\r\nscale_parameter (bool, optional, defaults to True) – If True, learning rate is scaled by root mean square\r\n```\r\nin the adafactor ([documentation](https://huggingface.co/transformers/main_classes/optimizer_schedules.html#adafactor-pytorch))\r\n\r\nbut maybe @patil-suraj could confirm this?\r\n\r\nBut as they used the scaling in the original paper I couldn't imagine it to be highly influential.", "For reference the optimizer I use is:\r\n```\r\noptimizer = transformers.Adafactor(model.parameters(), lr=0.001,\r\n relative_step=False, warmup_init=False, \r\n decay_rate=0.0, clip_threshold=1.0)\r\nscheduler = None\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
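For completeness, a hedged end-to-end sketch of the Adafactor setup quoted above applied to t5-base; the hyperparameters follow the comment, `scale_parameter=False` is an assumed interpretation of turning 'autoscaling' off, and the single-sentence batch is only illustrative:

```py
import torch
from transformers import Adafactor, T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").cuda()

# Constant-LR Adafactor, as in the comment above; it keeps factored second moments
# instead of Adam's two full-sized moment buffers, which saves optimizer memory.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,
    relative_step=False,
    warmup_init=False,
    decay_rate=0.0,
    clip_threshold=1.0,
    scale_parameter=False,  # assumed interpretation of 'autoscaling off'
)

batch = tokenizer(["translate English to German: Hello, how are you?"], return_tensors="pt")
labels = tokenizer(["Hallo, wie geht es dir?"], return_tensors="pt").input_ids.cuda()

outputs = model(
    input_ids=batch.input_ids.cuda(),
    attention_mask=batch.attention_mask.cuda(),
    labels=labels,
    return_dict=True,
)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```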
1,609
1,619
1,619
NONE
null
Hi, I am using transformers 3.5.1. t5-base uses a lot of memory; I cannot even train it on 4 V100 GPUs with a batch size of 32. Could you elaborate on whether there is any memory issue with this model? Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9430/timeline
completed
null
null