url (stringlengths 62–66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76–80) | comments_url (stringlengths 71–75) | events_url (stringlengths 69–73) | html_url (stringlengths 50–56) | id (int64 377M–2.15B) | node_id (stringlengths 18–32) | number (int64 1–29.2k) | title (stringlengths 1–487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k–1.71k) | updated_at (int64 1.54k–1.71k) | closed_at (int64 1.54k–1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0–234k ⌀) | reactions (dict) | timeline_url (stringlengths 71–75) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/8221 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8221/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8221/comments | https://api.github.com/repos/huggingface/transformers/issues/8221/events | https://github.com/huggingface/transformers/issues/8221 | 733,985,169 | MDU6SXNzdWU3MzM5ODUxNjk= | 8,221 | [GPT2] Loss NaN after some time with FP16 | {
"login": "ksjae",
"id": 17930170,
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksjae",
"html_url": "https://github.com/ksjae",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"repos_url": "https://api.github.com/users/ksjae/repos",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We can't really help without seeing the code you are running. Some of the models do not support FP16 for instance, and we have no idea which model you are using.",
"Oh, sorry. It's the example script ```examples/language_modeling/run_language_modeling.py``` but with modified data loader.\r\n\r\nfull code below:\r\n```\r\nimport logging\r\nimport math\r\nimport os\r\nimport glob\r\nimport datasets\r\nfrom dataclasses import dataclass, field\r\nfrom typing import Optional\r\n\r\nfrom datasets import list_datasets, load_dataset\r\n\r\nfrom transformers import (\r\n CONFIG_MAPPING,\r\n MODEL_WITH_LM_HEAD_MAPPING,\r\n AutoConfig,\r\n AutoModelWithLMHead,\r\n AutoTokenizer,\r\n DataCollatorForLanguageModeling,\r\n DataCollatorForPermutationLanguageModeling,\r\n HfArgumentParser,\r\n LineByLineTextDataset,\r\n PreTrainedTokenizer,\r\n TextDataset,\r\n Trainer,\r\n TrainingArguments,\r\n set_seed,\r\n)\r\n\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\n\r\nMODEL_CONFIG_CLASSES = list(MODEL_WITH_LM_HEAD_MAPPING.keys())\r\nMODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)\r\n\r\n\r\n@dataclass\r\nclass ModelArguments:\r\n \"\"\"\r\n Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.\r\n \"\"\"\r\n\r\n model_name_or_path: Optional[str] = field(\r\n default=None,\r\n metadata={\r\n \"help\": \"The model checkpoint for weights initialization. Leave None if you want to train a model from scratch.\"\r\n },\r\n )\r\n model_type: Optional[str] = field(\r\n default=None,\r\n metadata={\"help\": \"If training from scratch, pass a model type from the list: \" + \", \".join(MODEL_TYPES)},\r\n )\r\n config_name: Optional[str] = field(\r\n default=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\r\n )\r\n tokenizer_name: Optional[str] = field(\r\n default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\r\n )\r\n cache_dir: Optional[str] = field(\r\n default=None, metadata={\"help\": \"Where do you want to store the pretrained models downloaded from s3\"}\r\n )\r\n\r\n\r\n@dataclass\r\nclass DataTrainingArguments:\r\n \"\"\"\r\n Arguments pertaining to what data we are going to input our model for training and eval.\r\n \"\"\"\r\n\r\n train_data_file: Optional[str] = field(\r\n default=None, metadata={\"help\": \"The input training data file (a text file).\"}\r\n )\r\n eval_data_file: Optional[str] = field(\r\n default=None,\r\n metadata={\"help\": \"An optional input evaluation data file to evaluate the perplexity on (a text file).\"},\r\n )\r\n line_by_line: bool = field(\r\n default=False,\r\n metadata={\"help\": \"Whether distinct lines of text in the dataset are to be handled as distinct sequences.\"},\r\n )\r\n\r\n mlm: bool = field(\r\n default=False, metadata={\"help\": \"Train with masked-language modeling loss instead of language modeling.\"}\r\n )\r\n mlm_probability: float = field(\r\n default=0.15, metadata={\"help\": \"Ratio of tokens to mask for masked language modeling loss\"}\r\n )\r\n plm_probability: float = field(\r\n default=1 / 6,\r\n metadata={\r\n \"help\": \"Ratio of length of a span of masked tokens to surrounding context length for permutation language modeling.\"\r\n },\r\n )\r\n max_span_length: int = field(\r\n default=5, metadata={\"help\": \"Maximum length of a span of masked tokens for permutation language modeling.\"}\r\n )\r\n\r\n block_size: int = field(\r\n default=-1,\r\n metadata={\r\n \"help\": \"Optional input sequence length after tokenization.\"\r\n \"The training dataset will be truncated in block of this size for training.\"\r\n \"Default to the model max input 
length for single sentence inputs (take into account special tokens).\"\r\n },\r\n )\r\n overwrite_cache: bool = field(\r\n default=False, metadata={\"help\": \"Overwrite the cached training and evaluation sets\"}\r\n )\r\n arrow: bool = field(\r\n default=True,\r\n metadata={\r\n \"help\": \"Use Arrow-based HF NLP for optimization.\"\r\n },\r\n )\r\n\r\n\r\ndef get_dataset(\r\n args: DataTrainingArguments,\r\n tokenizer: PreTrainedTokenizer,\r\n evaluate: bool = False,\r\n cache_dir: Optional[str] = \"./cache\",\r\n):\r\n tokenizer.pad_token = \"<|endoftext|>\"\r\n tokenizer._pad_token = \"<|endoftext|>\"\r\n #tokenizer.pad_token_id = 50256\r\n file_path = args.eval_data_file if evaluate else args.train_data_file\r\n if True:\r\n dataset = datasets.load_from_disk(file_path)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n return dataset\r\n \r\n if False:\r\n dataset = load_dataset(\"text\", data_files=[file_path], split='train')\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n dataset.save_to_disk(file_path+'.arrow')\r\n return dataset\r\n \r\n if args.line_by_line:\r\n return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)\r\n else:\r\n return TextDataset(\r\n tokenizer=tokenizer,\r\n file_path=file_path,\r\n block_size=args.block_size,\r\n overwrite_cache=args.overwrite_cache,\r\n cache_dir=cache_dir,\r\n )\r\n \"\"\"\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n return dataset\r\n \"\"\"\r\n\r\ndef main():\r\n # See all possible arguments in src/transformers/training_args.py\r\n # or by passing the --help flag to this script.\r\n # We now keep distinct sets of args, for a cleaner separation of concerns.\r\n\r\n parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n\r\n if data_args.eval_data_file is None and training_args.do_eval:\r\n raise ValueError(\r\n \"Cannot do evaluation without an evaluation data file. Either supply a file to --eval_data_file \"\r\n \"or remove the --do_eval argument.\"\r\n )\r\n\r\n if (\r\n os.path.exists(training_args.output_dir)\r\n and os.listdir(training_args.output_dir)\r\n and training_args.do_train\r\n and not training_args.overwrite_output_dir\r\n ):\r\n raise ValueError(\r\n f\"Output directory ({training_args.output_dir}) already exists and is not empty. 
Use --overwrite_output_dir to overcome.\"\r\n )\r\n\r\n # Setup logging\r\n logging.basicConfig(\r\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,\r\n )\r\n logger.warning(\r\n \"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s\",\r\n training_args.local_rank,\r\n training_args.device,\r\n training_args.n_gpu,\r\n bool(training_args.local_rank != -1),\r\n training_args.fp16,\r\n )\r\n logger.info(\"Training/evaluation parameters %s\", training_args)\r\n\r\n # Set seed\r\n set_seed(training_args.seed)\r\n\r\n # Load pretrained model and tokenizer\r\n #\r\n # Distributed training:\r\n # The .from_pretrained methods guarantee that only one local process can concurrently\r\n # download model & vocab.\r\n\r\n if model_args.config_name:\r\n config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir)\r\n elif model_args.model_name_or_path:\r\n config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)\r\n else:\r\n config = CONFIG_MAPPING[model_args.model_type]()\r\n logger.warning(\"You are instantiating a new config instance from scratch.\")\r\n\r\n if model_args.tokenizer_name:\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n elif model_args.model_name_or_path:\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)\r\n else:\r\n raise ValueError(\r\n \"You are instantiating a new tokenizer from scratch. This is not supported, but you can do it from another script, save it,\"\r\n \"and load it from here, using --tokenizer_name\"\r\n )\r\n\r\n tokenizer.pad_token = \"<|endoftext|>\"\r\n tokenizer._pad_token = \"<|endoftext|>\"\r\n\r\n if model_args.model_name_or_path:\r\n model = AutoModelWithLMHead.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n )\r\n else:\r\n logger.info(\"Training new model from scratch\")\r\n model = AutoModelWithLMHead.from_config(config)\r\n\r\n model.resize_token_embeddings(len(tokenizer))\r\n\r\n if config.model_type in [\"bert\", \"roberta\", \"distilbert\", \"camembert\"] and not data_args.mlm:\r\n raise ValueError(\r\n \"BERT and RoBERTa-like models do not have LM heads but masked LM heads. 
They must be run using the\"\r\n \"--mlm flag (masked language modeling).\"\r\n )\r\n\r\n if data_args.block_size <= 0:\r\n data_args.block_size = tokenizer.max_len\r\n # Our input block size will be the max possible for the model\r\n else:\r\n data_args.block_size = min(data_args.block_size, tokenizer.max_len)\r\n\r\n # Get datasets\r\n\r\n train_dataset = (\r\n get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None\r\n )\r\n eval_dataset = (\r\n get_dataset(data_args, tokenizer=tokenizer, evaluate=True, cache_dir=model_args.cache_dir)\r\n if training_args.do_eval\r\n else None\r\n )\r\n if config.model_type == \"xlnet\":\r\n data_collator = DataCollatorForPermutationLanguageModeling(\r\n tokenizer=tokenizer,\r\n plm_probability=data_args.plm_probability,\r\n max_span_length=data_args.max_span_length,\r\n )\r\n else:\r\n data_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=data_args.mlm, mlm_probability=data_args.mlm_probability\r\n )\r\n\r\n # Initialize our Trainer\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n prediction_loss_only=True,\r\n )\r\n\r\n # Training\r\n if training_args.do_train:\r\n model_path = (\r\n model_args.model_name_or_path\r\n if model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path)\r\n else None\r\n )\r\n trainer.train(model_path=model_path)\r\n trainer.save_model()\r\n # For convenience, we also re-save the tokenizer to the same directory,\r\n # so that you can share your model easily on huggingface.co/models =)\r\n if trainer.is_world_master():\r\n tokenizer.save_pretrained(training_args.output_dir)\r\n\r\n # Evaluation\r\n results = {}\r\n if training_args.do_eval:\r\n logger.info(\"*** Evaluate ***\")\r\n\r\n eval_output = trainer.evaluate()\r\n\r\n perplexity = math.exp(eval_output[\"eval_loss\"])\r\n result = {\"perplexity\": perplexity}\r\n\r\n output_eval_file = os.path.join(training_args.output_dir, \"eval_results_lm.txt\")\r\n if trainer.is_world_master():\r\n with open(output_eval_file, \"w\") as writer:\r\n logger.info(\"***** Eval results *****\")\r\n for key in sorted(result.keys()):\r\n logger.info(\" %s = %s\", key, str(result[key]))\r\n writer.write(\"%s = %s\\n\" % (key, str(result[key])))\r\n\r\n results.update(result)\r\n\r\n return results\r\n\r\n\r\ndef _mp_fn(index):\r\n # For xla_spawn (TPUs)\r\n main()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```",
"Thanks, and which type of model do you use (e.g., what's the full command you launch)?",
"```\r\npython3.7 examples/language-modeling/run_language_modeling.py --output_dir=kogpt1 --model_type=gpt2 --do_train --train_data_file=/home/ksjae/kogpt-2/data/NEWS_ARROW --overwrite_output_dir --per_device_train_batch_size=12 --per_device_eval_batch_size=8 --save_steps 10000 --num_train_epochs=1 --block_size 2048 --eval_steps 25000 --logging_steps=1000 --tokenizer_name kotok --model_name_or_path gpt2-medium --fp16\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux-4.4.0-176-generic-x86_64-with-glibc2.17
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes, HF datasets
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts: examples/language_modeling/run_language_modeling.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset
## To reproduce
Steps to reproduce the behavior:
1. Run with ```--fp16 --n_ctx 2048```
2.
```
warnings.warn('Was asked to gather along dimension 0, but all '
[W python_anomaly_mode.cpp:104] Warning: Error detected in SoftmaxBackward. Traceback of forward call that caused the error:
File "/usr/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 765, in forward
transformer_outputs = self.transformer(
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 645, in forward
outputs = block(
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 285, in forward
attn_outputs = self.attn(
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 235, in forward
attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 176, in _attn
w = nn.Softmax(dim=-1)(w)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 1198, in forward
return F.softmax(input, self.dim, _stacklevel=5)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1512, in softmax
ret = input.softmax(dim)
(function _print_stack)
Traceback (most recent call last):
File "examples/language-modeling/run_language_modeling.py", line 349, in <module>
main()
File "examples/language-modeling/run_language_modeling.py", line 313, in main
trainer.train(model_path=model_path)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 756, in train
tr_loss += self.training_step(model, inputs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1065, in training_step
self.scaler.scale(loss).backward()
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 130, in backward
Variable._execution_engine.run_backward(
RuntimeError: Function 'SoftmaxBackward' returned nan values in its 0th output.
0%| | 0/19506024 [00:25<?, ?it/s]
```
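For illustration (editor's sketch, not from the original report): attention scores that overflow float16's ~65504 maximum become `inf`, and a softmax over `inf` entries yields NaN, which `SoftmaxBackward` then propagates:
```
import torch

# fp16 overflow -> inf -> NaN through softmax (illustrative only)
scores = torch.tensor([60000.0, 60000.0], dtype=torch.float16)
print(scores * 2)                                   # tensor([inf, inf], dtype=torch.float16)
print(torch.softmax((scores * 2).float(), dim=-1))  # tensor([nan, nan])
```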
## Expected behavior
The loss should not become NaN.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8221/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8221/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8220/comments | https://api.github.com/repos/huggingface/transformers/issues/8220/events | https://github.com/huggingface/transformers/issues/8220 | 733,953,625 | MDU6SXNzdWU3MzM5NTM2MjU= | 8,220 | Example for running T5 for translation | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | Hi,
I am having a hard time fine-tuning T5-small on WMT-14 de/en: the BLEU score does not go up. I followed the question-answering notebooks for T5; is there anything specific to consider for translation, e.g. any particular parameters that need to be passed to model.generate?
Could you share some example code where you have made translation work?
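A minimal sketch of T5 translation inference (editor's addition; the `translate English to German:` prefix follows the T5 paper's convention, and the generation parameters are illustrative, not tuned):
```
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 expects a task prefix for translation
input_ids = tokenizer("translate English to German: The house is wonderful.",
                      return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```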
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8220/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8219/comments | https://api.github.com/repos/huggingface/transformers/issues/8219/events | https://github.com/huggingface/transformers/issues/8219 | 733,940,916 | MDU6SXNzdWU3MzM5NDA5MTY= | 8,219 | Roberta weights are not initialized loading the bare Roberta | {
"login": "ZahraAbbasiantaeb",
"id": 25108522,
"node_id": "MDQ6VXNlcjI1MTA4NTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25108522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZahraAbbasiantaeb",
"html_url": "https://github.com/ZahraAbbasiantaeb",
"followers_url": "https://api.github.com/users/ZahraAbbasiantaeb/followers",
"following_url": "https://api.github.com/users/ZahraAbbasiantaeb/following{/other_user}",
"gists_url": "https://api.github.com/users/ZahraAbbasiantaeb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZahraAbbasiantaeb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZahraAbbasiantaeb/subscriptions",
"organizations_url": "https://api.github.com/users/ZahraAbbasiantaeb/orgs",
"repos_url": "https://api.github.com/users/ZahraAbbasiantaeb/repos",
"events_url": "https://api.github.com/users/ZahraAbbasiantaeb/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZahraAbbasiantaeb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: '3.4.0'
- Platform: Colab
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [*] my own modified scripts: (give details below)
```
import tensorflow as tf
import transformers

class ROBERTA(transformers.TFRobertaModel):
    def __init__(self, config, *inputs, **kwargs):
        super(ROBERTA, self).__init__(config, *inputs, **kwargs)
        self.roberta.call = tf.function(self.roberta.call)

def build_model():
    # Define inputs (token_ids, mask_ids, seg_ids)
    input_size = 2 * Each_seq_length + 4  # Each_seq_length is defined elsewhere in the script
    token_inputs = tf.keras.layers.Input(shape=(input_size,), name='word_inputs', dtype='int32')
    # Load model and collect encodings
    roberta = ROBERTA.from_pretrained('roberta-base')
    print(roberta.config)
    roberta_encodings = roberta(token_inputs, training=True)[0]
    # Keep [CLS] token encoding
    doc_encoding = tf.squeeze(roberta_encodings[:, 0:1, :], axis=1)
    # Apply dropout
    doc_encoding = tf.keras.layers.Dropout(0.1)(doc_encoding)
    # predicted_labels, log_probs = CF_model(0.5, 8)(doc_encoding)
    # Final output (projection) layer: one dense layer for prediction
    outputs = tf.keras.layers.Dense(1, activation='sigmoid', name='outputs')(doc_encoding)
    # Wrap-up model
    model = tf.keras.models.Model(inputs=[token_inputs], outputs=[outputs])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr=4e-6, epsilon=1e-8), loss=tf.keras.losses.BinaryCrossentropy())
    return model
```
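For comparison, a minimal sketch (editor's addition; `input_size` is a placeholder) that builds the same head on the stock `TFRobertaModel` without subclassing; if the warning disappears with this variant, the subclass wrapper is worth investigating:
```
import tensorflow as tf
from transformers import TFRobertaModel

input_size = 128  # placeholder; the original script derives this from Each_seq_length
token_inputs = tf.keras.layers.Input(shape=(input_size,), name="word_inputs", dtype="int32")
roberta = TFRobertaModel.from_pretrained("roberta-base")
encodings = roberta(token_inputs)[0]                             # last hidden states
doc_encoding = tf.keras.layers.Dropout(0.1)(encodings[:, 0, :])  # [CLS] position
outputs = tf.keras.layers.Dense(1, activation="sigmoid", name="outputs")(doc_encoding)
model = tf.keras.models.Model(inputs=[token_inputs], outputs=[outputs])
model.compile(optimizer=tf.keras.optimizers.Adam(4e-6), loss=tf.keras.losses.BinaryCrossentropy())
```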
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [*] my own task or dataset: (give details below)
## To reproduce
sentence-pair classification
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Previously, only the 'lm_head' layers were not initialized; but now, using the same script, many more weights are not initialized, including the encoder layers. The following warning is printed:
Some layers from the model checkpoint at roberta-base were not used when initializing ROBERTA: ['lm_head', 'encoder/layer_._3/attention/self/value/bias:0', 'encoder/layer_._10/attention/self/value/bias:0', 'encoder/layer_._10/attention/self/key/kernel:0', 'pooler/dense/bias:0', 'encoder/layer_._9/attention/self/query/kernel:0', 'encoder/layer_._10/attention/self/query/kernel:0', 'encoder/layer_._7/attention/output/dense/bias:0', 'embeddings/position_embeddings/embeddings:0', 'encoder/layer_._6/intermediate/dense/kernel:0', 'encoder/layer_._11/intermediate/dense/kernel:0', 'encoder/layer_._8/intermediate/dense/bias:0', 'encoder/layer_._10/attention/self/value/kernel:0', 'encoder/layer_._7/output/dense/bias:0', 'encoder/layer_._6/attention/self/value/bias:0', 'encoder/layer_._8/attention/output/dense/kernel:0', 'encoder/layer_._10/intermediate/dense/kernel:0', 'encoder/layer_._5/attention/self/value/kernel:0', 'encoder/layer_._6/attention/output/LayerNorm/gamma:0', 'encoder/layer_._7/attention/self/query/kernel:0', 'encoder/layer_._6/attention/self/query/kernel:0', 'encoder/layer_._6/attention/self/key/bias:0', 'encoder/layer_._8/attention/output/LayerNorm/gamma:0', 'encoder/layer_._2/output/dense/kernel:0', 'encoder/layer_._11/intermediate/dense/bias:0', 'encoder/layer_._6/output/dense/kernel:0', 'encoder/layer_._2/intermediate/dense/kernel:0', 'encoder/layer_._3/intermediate/dense/kernel:0', 'encoder/layer_._10/output/LayerNorm/beta:0', 'encoder/layer_._6/attention/self/query/bias:0', 'encoder/layer_._6/attention/output/LayerNorm/beta:0', 'encoder/layer_._9/attention/self/value/bias:0', 'encoder/layer_._8/attention/self/query/kernel:0', 'encoder/layer_._0/output/LayerNorm/gamma:0', 'encoder/layer_._11/attention/output/dense/bias:0', 'encoder/layer_._7/attention/self/value/bias:0', 'encoder/layer_._0/attention/output/dense/kernel:0', 'encoder/layer_._9/intermediate/dense/bias:0', 'encoder/layer_._2/attention/self/query/kernel:0', 'encoder/layer_._0/attention/self/key/bias:0', 'encoder/layer_._8/attention/output/LayerNorm/beta:0', 'encoder/layer_._1/attention/self/value/kernel:0', 'encoder/layer_._6/output/LayerNorm/gamma:0', 'encoder/layer_._1/attention/output/dense/bias:0', 'encoder/layer_._3/attention/self/query/bias:0', 'encoder/layer_._3/output/dense/bias:0', 'encoder/layer_._1/attention/self/key/kernel:0', 'encoder/layer_._8/attention/self/key/kernel:0', 'encoder/layer_._9/intermediate/dense/kernel:0', 'encoder/layer_._3/output/dense/kernel:0', 'encoder/layer_._2/output/LayerNorm/beta:0', 'encoder/layer_._7/attention/self/key/bias:0', 'encoder/layer_._5/attention/self/key/kernel:0', 'encoder/layer_._5/attention/self/query/bias:0', 'encoder/layer_._2/attention/output/dense/bias:0', 'encoder/layer_._4/intermediate/dense/kernel:0', 'encoder/layer_._1/intermediate/dense/bias:0', 'encoder/layer_._4/attention/self/value/kernel:0', 'encoder/layer_._11/attention/self/key/bias:0', 'encoder/layer_._5/output/dense/kernel:0', 'encoder/layer_._1/output/dense/bias:0', 'encoder/layer_._0/attention/self/value/bias:0', 'encoder/layer_._6/attention/self/key/kernel:0', 'encoder/layer_._9/attention/self/key/bias:0', 'encoder/layer_._7/output/LayerNorm/gamma:0', 'encoder/layer_._8/attention/output/dense/bias:0', 'encoder/layer_._10/attention/output/dense/bias:0', 'encoder/layer_._0/intermediate/dense/kernel:0', 'encoder/layer_._5/intermediate/dense/kernel:0', 'encoder/layer_._11/attention/self/value/kernel:0', 'encoder/layer_._8/attention/self/key/bias:0', 'encoder/layer_._8/output/dense/bias:0', 
'encoder/layer_._8/intermediate/dense/kernel:0', 'encoder/layer_._7/attention/output/LayerNorm/beta:0', 'encoder/layer_._2/output/dense/bias:0', 'encoder/layer_._3/attention/output/dense/bias:0', 'encoder/layer_._0/output/dense/bias:0', 'encoder/layer_._9/attention/self/key/kernel:0', 'encoder/layer_._11/output/dense/bias:0', 'encoder/layer_._7/attention/self/query/bias:0', 'encoder/layer_._10/attention/self/key/bias:0', 'encoder/layer_._2/attention/output/dense/kernel:0', 'encoder/layer_._2/attention/self/query/bias:0', 'encoder/layer_._9/attention/output/dense/kernel:0', 'encoder/layer_._9/attention/output/LayerNorm/gamma:0', 'encoder/layer_._9/output/LayerNorm/gamma:0', 'encoder/layer_._0/attention/output/LayerNorm/beta:0', 'encoder/layer_._1/intermediate/dense/kernel:0', 'encoder/layer_._1/output/dense/kernel:0', 'encoder/layer_._1/attention/self/key/bias:0', 'encoder/layer_._2/attention/self/value/kernel:0', 'encoder/layer_._9/attention/self/value/kernel:0', 'encoder/layer_._10/intermediate/dense/bias:0', 'encoder/layer_._4/intermediate/dense/bias:0', 'encoder/layer_._6/output/LayerNorm/beta:0', 'encoder/layer_._7/output/LayerNorm/beta:0', 'encoder/layer_._11/attention/self/query/bias:0', 'encoder/layer_._0/intermediate/dense/bias:0', 'encoder/layer_._11/attention/output/dense/kernel:0', 'encoder/layer_._5/attention/self/query/kernel:0', 'encoder/layer_._8/attention/self/value/kernel:0', 'encoder/layer_._11/output/LayerNorm/beta:0', 'encoder/layer_._9/output/dense/bias:0', 'encoder/layer_._4/output/dense/bias:0', 'encoder/layer_._2/attention/self/key/bias:0', 'encoder/layer_._3/attention/self/query/kernel:0', 'encoder/layer_._4/attention/output/LayerNorm/gamma:0', 'encoder/layer_._1/attention/output/LayerNorm/beta:0', 'encoder/layer_._1/output/LayerNorm/beta:0', 'encoder/layer_._10/attention/output/LayerNorm/beta:0', 'encoder/layer_._3/attention/self/value/kernel:0', 'encoder/layer_._10/attention/self/query/bias:0', 'encoder/layer_._3/attention/self/key/bias:0', 'pooler/dense/kernel:0', 'encoder/layer_._1/attention/self/value/bias:0', 'encoder/layer_._7/attention/self/key/kernel:0', 'encoder/layer_._1/attention/output/dense/kernel:0', 'encoder/layer_._4/attention/self/key/kernel:0', 'encoder/layer_._8/output/dense/kernel:0', 'encoder/layer_._3/attention/output/LayerNorm/gamma:0', 'encoder/layer_._0/attention/self/value/kernel:0', 'encoder/layer_._3/attention/self/key/kernel:0', 'encoder/layer_._0/attention/self/query/kernel:0', 'encoder/layer_._3/intermediate/dense/bias:0', 'encoder/layer_._7/output/dense/kernel:0', 'encoder/layer_._10/output/dense/kernel:0', 'encoder/layer_._7/intermediate/dense/bias:0', 'embeddings/word_embeddings/weight:0', 'encoder/layer_._3/attention/output/LayerNorm/beta:0', 'encoder/layer_._0/attention/self/key/kernel:0', 'encoder/layer_._4/output/dense/kernel:0', 'encoder/layer_._5/output/LayerNorm/gamma:0', 'encoder/layer_._9/attention/output/dense/bias:0', 'encoder/layer_._0/attention/output/dense/bias:0', 'encoder/layer_._5/attention/output/LayerNorm/gamma:0', 'encoder/layer_._9/attention/output/LayerNorm/beta:0', 'encoder/layer_._11/output/LayerNorm/gamma:0', 'encoder/layer_._11/attention/output/LayerNorm/gamma:0', 'encoder/layer_._6/intermediate/dense/bias:0', 'encoder/layer_._2/attention/output/LayerNorm/gamma:0', 'encoder/layer_._5/output/dense/bias:0', 'encoder/layer_._0/output/dense/kernel:0', 'encoder/layer_._6/attention/output/dense/kernel:0', 'encoder/layer_._6/attention/output/dense/bias:0', 'encoder/layer_._1/attention/self/query/kernel:0', 
'encoder/layer_._0/attention/self/query/bias:0', 'encoder/layer_._11/attention/self/value/bias:0', 'encoder/layer_._2/intermediate/dense/bias:0', 'embeddings/LayerNorm/beta:0', 'encoder/layer_._4/attention/output/dense/kernel:0', 'encoder/layer_._3/output/LayerNorm/beta:0', 'encoder/layer_._8/output/LayerNorm/gamma:0', 'encoder/layer_._10/attention/output/dense/kernel:0', 'encoder/layer_._11/output/dense/kernel:0', 'encoder/layer_._2/attention/output/LayerNorm/beta:0', 'encoder/layer_._7/attention/output/dense/kernel:0', 'encoder/layer_._9/attention/self/query/bias:0', 'encoder/layer_._4/attention/self/key/bias:0', 'encoder/layer_._2/output/LayerNorm/gamma:0', 'encoder/layer_._0/attention/output/LayerNorm/gamma:0', 'encoder/layer_._1/attention/output/LayerNorm/gamma:0', 'encoder/layer_._1/attention/self/query/bias:0', 'encoder/layer_._5/attention/output/LayerNorm/beta:0', 'encoder/layer_._10/output/dense/bias:0', 'encoder/layer_._8/output/LayerNorm/beta:0', 'encoder/layer_._5/output/LayerNorm/beta:0', 'embeddings/token_type_embeddings/embeddings:0', 'encoder/layer_._5/attention/output/dense/bias:0', 'encoder/layer_._4/output/LayerNorm/beta:0', 'encoder/layer_._4/attention/self/query/kernel:0', 'encoder/layer_._5/attention/output/dense/kernel:0', 'encoder/layer_._7/attention/self/value/kernel:0', 'encoder/layer_._7/intermediate/dense/kernel:0', 'encoder/layer_._11/attention/self/key/kernel:0', 'encoder/layer_._3/output/LayerNorm/gamma:0', 'encoder/layer_._10/output/LayerNorm/gamma:0', 'encoder/layer_._8/attention/self/query/bias:0', 'encoder/layer_._3/attention/output/dense/kernel:0', 'encoder/layer_._4/output/LayerNorm/gamma:0', 'encoder/layer_._10/attention/output/LayerNorm/gamma:0', 'encoder/layer_._4/attention/self/value/bias:0', 'encoder/layer_._11/attention/self/query/kernel:0', 'encoder/layer_._4/attention/output/dense/bias:0', 'encoder/layer_._4/attention/output/LayerNorm/beta:0', 'encoder/layer_._5/attention/self/key/bias:0', 'encoder/layer_._6/attention/self/value/kernel:0', 'encoder/layer_._5/attention/self/value/bias:0', 'encoder/layer_._11/attention/output/LayerNorm/beta:0', 'encoder/layer_._1/output/LayerNorm/gamma:0', 'encoder/layer_._2/attention/self/value/bias:0', 'encoder/layer_._9/output/dense/kernel:0', 'encoder/layer_._2/attention/self/key/kernel:0', 'encoder/layer_._9/output/LayerNorm/beta:0', 'encoder/layer_._7/attention/output/LayerNorm/gamma:0', 'encoder/layer_._5/intermediate/dense/bias:0', 'embeddings/LayerNorm/gamma:0', 'encoder/layer_._0/output/LayerNorm/beta:0', 'encoder/layer_._6/output/dense/bias:0', 'encoder/layer_._8/attention/self/value/bias:0', 'encoder/layer_._4/attention/self/query/bias:0']
<!-- A clear and concise description of what you would expect to happen. --> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8219/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8218/comments | https://api.github.com/repos/huggingface/transformers/issues/8218/events | https://github.com/huggingface/transformers/issues/8218 | 733,931,827 | MDU6SXNzdWU3MzM5MzE4Mjc= | 8,218 | ValueError: decoder_start_token_id or bos_token_id has to be defined for encoder-decoder generation | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\ncould you assist in adding an example in tutorial for this, I followed the tutorial sample for summarization, Is the way I made my inputs (\"Question: Question sequence \") correct? thanks \r\nhere is the tutorial sample: \r\n```\r\n>>> input_ids = tokenizer(\"summarize: studies have shown that owning a dog is good for you \", return_tensors=\"pt\").input_ids # Batch size 1\r\n>>> outputs = model.generate(input_ids)\r\n```",
"solved when I load the config from a pretrained model, maybe this helps to add the extra info needed as the default setting :) "
] | 1,604 | 1,604 | 1,604 | NONE | null | Hi
I followed the tutorial for generation with T5. I format my input as "Question: Question sequence" and my target as "Target sequence", encode the sentences with tokenizer.batch_encode_plus, and then call generate as:
model.generate(batch[input_ids], attention_mask=..., max_length=..., early_stopping=True)
I got the following error:
File "/opt/conda/envs/pl/lib/python3.7/site-packages/transformers/generation_utils.py", line 398, in generate
"decoder_start_token_id or bos_token_id has to be defined for encoder-decoder generation"
ValueError: decoder_start_token_id or bos_token_id has to be defined for encoder-decoder generation
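As the resolution in the comments above suggests, loading the config from a pretrained checkpoint supplies the missing token id. A minimal sketch (editor's addition, assuming T5 as in this issue):
```
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config.from_pretrained("t5-small")
print(config.decoder_start_token_id)  # 0 -- T5 starts decoding from the pad token
model = T5ForConditionalGeneration.from_pretrained("t5-small", config=config)
# or pass it explicitly per call:
# model.generate(input_ids, decoder_start_token_id=config.decoder_start_token_id)
```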
Still, my input format seems to match the tutorial's. Thanks for your help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8218/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8216/comments | https://api.github.com/repos/huggingface/transformers/issues/8216/events | https://github.com/huggingface/transformers/issues/8216 | 733,915,449 | MDU6SXNzdWU3MzM5MTU0NDk= | 8,216 | tokenizer's is_split_into_words seems not work | {
"login": "HenryPaik1",
"id": 42961175,
"node_id": "MDQ6VXNlcjQyOTYxMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/42961175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HenryPaik1",
"html_url": "https://github.com/HenryPaik1",
"followers_url": "https://api.github.com/users/HenryPaik1/followers",
"following_url": "https://api.github.com/users/HenryPaik1/following{/other_user}",
"gists_url": "https://api.github.com/users/HenryPaik1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HenryPaik1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HenryPaik1/subscriptions",
"organizations_url": "https://api.github.com/users/HenryPaik1/orgs",
"repos_url": "https://api.github.com/users/HenryPaik1/repos",
"events_url": "https://api.github.com/users/HenryPaik1/events{/privacy}",
"received_events_url": "https://api.github.com/users/HenryPaik1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"wrongly posted. I delete it."
] | 1,604 | 1,604 | 1,604 | NONE | null | I input a tokenized list of tokens, but it returns a different result (not counting pad tokens). It seems to tokenize the pre-tokenized tokens, ignoring `is_split_into_words`. Please refer to the code below:
```
sent = "the latest investigation was authorized after the supreme court in 2007 found dcc and its founder , jim flavin , guilty of selling dcc 's ( euro ) 106 million ( then $ 130 million ) stake in fyffes after flavin -- also a fyffes director at the time -- received inside information about bad fyffes news in the pipeline ."
encoded_dict = tokenizer.encode_plus(
    sent,                        # Sentence to encode.
    add_special_tokens=False,    # Don't add '[CLS]' and '[SEP]'.
    max_length=314,              # Pad & truncate all sentences.
    padding='max_length',
    return_attention_mask=True,  # Construct attn. masks.
    return_tensors='pt',         # Return PyTorch tensors.
    return_token_type_ids=False, # Don't return token type ids.
    truncation=False,
    is_split_into_words=False)
input_ids = encoded_dict['input_ids']
tokenized = tokenizer.convert_ids_to_tokens([i.item() for i in input_ids.squeeze() if i > 1])
len(tokenized)
>> 79
print(tokenized)
>> ['the', 'latest', 'investigation', 'was', 'authorized', 'after', 'the', 'supreme', 'court', 'in', '2007', 'found', 'dc', '##c', 'and', 'its', 'founder', ',', 'jim', 'fl', '##avi', '##n', ',', 'guilty', 'of', 'selling', 'dc', '##c', "'", 's', '(', 'euro', ')', '106', 'million', '(', 'then', '$', '130', 'million', ')', 'stake', 'in', 'f', '##y', '##ffe', '##s', 'after', 'fl', '##avi', '##n', '-', '-', 'also', 'a', 'f', '##y', '##ffe', '##s', 'director', 'at', 'the', 'time', '-', '-', 'received', 'inside', 'information', 'about', 'bad', 'f', '##y', '##ffe', '##s', 'news', 'in', 'the', 'pipeline', '.']
###### tokenizing pretokenized tokens as list
encoded_dict = tokenizer.encode_plus(
    tokenized,                   # Pre-tokenized input to encode.
    add_special_tokens=False,    # Don't add '[CLS]' and '[SEP]'.
    max_length=314,              # Pad & truncate all sentences.
    padding='max_length',
    return_attention_mask=True,  # Construct attn. masks.
    return_tensors='pt',         # Return PyTorch tensors.
    return_token_type_ids=False, # Don't return token type ids.
    truncation=False,
    is_split_into_words=True)
input_ids = encoded_dict['input_ids']
tokenized = tokenizer.convert_ids_to_tokens([i.item() for i in input_ids.squeeze() if i > 1])
len(tokenized)
>> 114 # it should be 79
print(tokenized)
>> ['the', 'latest', 'investigation', 'was', 'authorized', 'after', 'the', 'supreme', 'court', 'in', '2007', 'found', 'dc', '#', '#', 'c', 'and', 'its', 'founder', ',', 'jim', 'fl', '#', '#', 'av', '##i', '#', '#', 'n', ',', 'guilty', 'of', 'selling', 'dc', '#', '#', 'c', "'", 's', '(', 'euro', ')', '106', 'million', '(', 'then', '$', '130', 'million', ')', 'stake', 'in', 'f', '#', '#', 'y', '#', '#', 'ff', '##e', '#', '#', 's', 'after', 'fl', '#', '#', 'av', '##i', '#', '#', 'n', '-', '-', 'also', 'a', 'f', '#', '#', 'y', '#', '#', 'ff', '##e', '#', '#', 's', 'director', 'at', 'the', 'time', '-', '-', 'received', 'inside', 'information', 'about', 'bad', 'f', '#', '#', 'y', '#', '#', 'ff', '##e', '#', '#', 's', 'news', 'in', 'the', 'pipeline', '.']
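# Note (editor's addition, based on the maintainer explanation in the comments of the
# follow-up issue below): `is_split_into_words=True` declares the list elements to be
# *words* (split on whitespace), not finished wordpiece tokens, so each element is
# tokenized again -- which is why '##c' becomes '#', '#', 'c' above. For input that is
# already tokenized, use tokenizer.convert_tokens_to_ids(...) instead.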
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8216/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8215/comments | https://api.github.com/repos/huggingface/transformers/issues/8215/events | https://github.com/huggingface/transformers/issues/8215 | 733,913,568 | MDU6SXNzdWU3MzM5MTM1Njg= | 8,215 | Setting os.environ['CUDA_VISIBLE_DEVICES'] = ‘1’, but always training on GPU0, how to set it(GPT2)? | {
"login": "TheoRenLi",
"id": 50821257,
"node_id": "MDQ6VXNlcjUwODIxMjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/50821257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheoRenLi",
"html_url": "https://github.com/TheoRenLi",
"followers_url": "https://api.github.com/users/TheoRenLi/followers",
"following_url": "https://api.github.com/users/TheoRenLi/following{/other_user}",
"gists_url": "https://api.github.com/users/TheoRenLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheoRenLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheoRenLi/subscriptions",
"organizations_url": "https://api.github.com/users/TheoRenLi/orgs",
"repos_url": "https://api.github.com/users/TheoRenLi/repos",
"events_url": "https://api.github.com/users/TheoRenLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheoRenLi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @TheoRenLi, this question should go on https://discuss.huggingface.co\r\n\r\nWe keep the issues of the repo for bug and features request (with clear descriptions).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
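Regarding the question in the title, a minimal sketch (editor's addition): `CUDA_VISIBLE_DEVICES` only takes effect if it is set before CUDA is initialized, i.e. before `torch` first touches the GPU; otherwise training silently stays on GPU 0:
```
import os

# Must be set before importing torch / initializing CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch
# Visible device 0 now maps to physical GPU 1 (on a multi-GPU machine).
print(torch.cuda.current_device())  # 0
```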
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8215/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8217/comments | https://api.github.com/repos/huggingface/transformers/issues/8217/events | https://github.com/huggingface/transformers/issues/8217 | 733,919,619 | MDU6SXNzdWU3MzM5MTk2MTk= | 8,217 | tokenizer "is_split_into_words" seems not work | {
"login": "HenryPaik1",
"id": 42961175,
"node_id": "MDQ6VXNlcjQyOTYxMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/42961175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HenryPaik1",
"html_url": "https://github.com/HenryPaik1",
"followers_url": "https://api.github.com/users/HenryPaik1/followers",
"following_url": "https://api.github.com/users/HenryPaik1/following{/other_user}",
"gists_url": "https://api.github.com/users/HenryPaik1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HenryPaik1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HenryPaik1/subscriptions",
"organizations_url": "https://api.github.com/users/HenryPaik1/orgs",
"repos_url": "https://api.github.com/users/HenryPaik1/repos",
"events_url": "https://api.github.com/users/HenryPaik1/events{/privacy}",
"received_events_url": "https://api.github.com/users/HenryPaik1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"the same issue, is there any workaround?",
"same issue, I think there is a bug in PreTrainedTokenizer class\r\n```\r\n def get_input_ids(text):\r\n print(text)\r\n if isinstance(text, str):\r\n tokens = self.tokenize(text, **kwargs)\r\n return self.convert_tokens_to_ids(tokens)\r\n elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str):\r\n if is_split_into_words:\r\n tokens = list(\r\n itertools.chain(*(self.tokenize(t, is_split_into_words=True, **kwargs) for t in text))\r\n )\r\n return self.convert_tokens_to_ids(tokens)\r\n else:\r\n return self.convert_tokens_to_ids(text)\r\n```\r\nin `if is_split_into_words` case (where the input is pretokenized words), the tokenizer should directly return ids.",
"Hello! I think all of the confusion here may be because you're expecting `is_split_into_words` to understand that the text was already pre-tokenized. This is not the case, it means that the string was split into words (not tokens), i.e., split on spaces.\r\n\r\n@HenryPaik1, in your example, your list of words is the following:\r\n```py\r\n['the', 'latest', 'investigation', 'was', 'authorized', 'after', 'the', 'supreme', 'court', 'in', '2007', 'found', 'dc', '##c', 'and', 'its', 'founder', ',', 'jim', 'fl', '##avi', '##n', ',', 'guilty', 'of', 'selling', 'dc', '##c', \"'\", 's', '(', 'euro', ')', '106', 'million', '(', 'then', '$', '130', 'million', ')', 'stake', 'in', 'f', '##y', '##ffe', '##s', 'after', 'fl', '##avi', '##n', '-', '-', 'also', 'a', 'f', '##y', '##ffe', '##s', 'director', 'at', 'the', 'time', '-', '-', 'received', 'inside', 'information', 'about', 'bad', 'f', '##y', '##ffe', '##s', 'news', 'in', 'the', 'pipeline', '.']\r\n```\r\nSome of these strings are tokens, but not words. Running the encoding method on it once again means that you're re-tokenizing some of these tokens.\r\n\r\nYou can see it is the case, as the following token:\r\n```py\r\n [..., '##c', ...]\r\n```\r\nbecame:\r\n```py\r\n[..., '#', '#', 'c', ...]\r\n```\r\n\r\nI think in your case you're looking for the method `convert_tokens_to_ids`: your sequence is already tokenized, you only need the IDs. If you're looking to use `encode_plus` because you need padding/trunc/conversion to tensors, etc., then you can simply use it without specifying that the sequence is separated into words. Please be aware that the following code only works on python tokenizers, i.e., slow tokenizers.\r\n\r\n```py\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nsent = \"the latest investigation was authorized after the supreme court in 2007 found dcc and its founder , jim flavin , guilty of selling dcc 's ( euro ) 106 million ( then $ 130 million ) stake in fyffes after flavin -- also a fyffes director at the time -- received inside information about bad fyffes news in the pipeline .\"\r\n\r\nencoded_dict = tokenizer.encode_plus(\r\n sent, # Sentence to encode.\r\n add_special_tokens = False, # Add '[CLS]' and '[SEP]'\r\n max_length = 314, # Pad & truncate all sentences.\r\n padding = 'max_length',\r\n return_attention_mask = True, # Construct attn. masks.\r\n return_tensors = 'pt',\r\n truncation=False,\r\n is_split_into_words=False)\r\ninput_ids = encoded_dict['input_ids']\r\ntokenized = tokenizer.convert_ids_to_tokens([i.item() for i in input_ids.squeeze() if i > 1])\r\nprint(len(tokenized))\r\n#80 \r\n\r\n###### tokenizing pretokenized tokens as list\r\nencoded_dict = tokenizer.encode_plus(\r\n tokenized, # Sentence to encode.\r\n add_special_tokens = False, # Add '[CLS]' and '[SEP]'\r\n max_length = 314, # Pad & truncate all sentences.\r\n padding = 'max_length',\r\n return_attention_mask = True, # Construct attn. masks.\r\n return_tensors = 'pt',\r\n truncation=False,\r\n )\r\n\r\ninput_ids = encoded_dict['input_ids']\r\ntokenized = tokenizer.convert_ids_to_tokens([i.item() for i in input_ids.squeeze() if i > 1])\r\nprint(len(tokenized))\r\n# 80\r\n```",
"@LysandreJik Thanks for your explanation. Yes, I want to use `encode_plus` for padding/trunc. It looks I thought the argument, `is_split_into_words`, the other way around. `is_split_into_words=True` seems for the \"not tokenized sentence.\" \r\nAnd if I understand correctly, you mean the part below is executed by python:\r\n```\r\ndef get_input_ids(text):\r\n if isinstance(text, str):\r\n tokens = self.tokenize(text, **kwargs)\r\n return self.convert_tokens_to_ids(tokens)\r\n elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str):\r\n if is_split_into_words:\r\n ####### this part ########\r\n tokens = list(\r\n itertools.chain(*(self.tokenize(t, is_split_into_words=True, **kwargs) for t in text))\r\n )\r\n ####### this part ########\r\n return self.convert_tokens_to_ids(tokens)\r\n else:\r\n return self.convert_tokens_to_ids(text)\r\n```",
"The part you've highlighted is performing tokenization on each individual word (not token!). You can see here that if it was already tokenized, then applying a second tokenization would be incorrect.",
"@LysandreJik Understood, Thanks. I close the issue.",
"I think the tokenizer should support a new kwarg such as:\r\n` is_already_tokens=False/True`"
] | 1,604 | 1,649 | 1,606 | NONE | null | I input a tokenized list of tokens, but it returns a different result (not counting pad tokens). It seems to tokenize the pre-tokenized tokens, ignoring `is_split_into_words`. Please refer to the code below:
```
sent = "the latest investigation was authorized after the supreme court in 2007 found dcc and its founder , jim flavin , guilty of selling dcc 's ( euro ) 106 million ( then $ 130 million ) stake in fyffes after flavin -- also a fyffes director at the time -- received inside information about bad fyffes news in the pipeline ."
encoded_dict = tokenizer.encode_plus(
                        sent,                       # Sentence to encode.
                        add_special_tokens = False, # Don't add '[CLS]' and '[SEP]'
                        max_length = 314,           # Pad all sentences to this length.
                        padding = 'max_length',
                        return_attention_mask = True,   # Construct attn. masks.
                        return_tensors = 'pt',          # Return pytorch tensors.
                        return_token_type_ids=False,    # Don't return token type ids.
                        truncation=False,
                        is_split_into_words=False)
input_ids = encoded_dict['input_ids']
tokenized = tokenizer.convert_ids_to_tokens([i.item() for i in input_ids.squeeze() if i > 1])
len(tokenized)
>> 79
print(tokenized)
>> ['the', 'latest', 'investigation', 'was', 'authorized', 'after', 'the', 'supreme', 'court', 'in', '2007', 'found', 'dc', '##c', 'and', 'its', 'founder', ',', 'jim', 'fl', '##avi', '##n', ',', 'guilty', 'of', 'selling', 'dc', '##c', "'", 's', '(', 'euro', ')', '106', 'million', '(', 'then', '$', '130', 'million', ')', 'stake', 'in', 'f', '##y', '##ffe', '##s', 'after', 'fl', '##avi', '##n', '-', '-', 'also', 'a', 'f', '##y', '##ffe', '##s', 'director', 'at', 'the', 'time', '-', '-', 'received', 'inside', 'information', 'about', 'bad', 'f', '##y', '##ffe', '##s', 'news', 'in', 'the', 'pipeline', '.']
###### tokenizing pretokenized tokens as list
encoded_dict = tokenizer.encode_plus(
                        tokenized,                  # Pre-tokenized list to encode.
                        add_special_tokens = False, # Don't add '[CLS]' and '[SEP]'
                        max_length = 314,           # Pad all sentences to this length.
                        padding = 'max_length',
                        return_attention_mask = True,   # Construct attn. masks.
                        return_tensors = 'pt',          # Return pytorch tensors.
                        return_token_type_ids=False,    # Don't return token type ids.
                        truncation=False,
                        is_split_into_words=True)
input_ids = encoded_dict['input_ids']
tokenized = tokenizer.convert_ids_to_tokens([i.item() for i in input_ids.squeeze() if i > 1])
len(tokenized)
>> 114 # it should be 79
print(tokenized)
>> ['the', 'latest', 'investigation', 'was', 'authorized', 'after', 'the', 'supreme', 'court', 'in', '2007', 'found', 'dc', '#', '#', 'c', 'and', 'its', 'founder', ',', 'jim', 'fl', '#', '#', 'av', '##i', '#', '#', 'n', ',', 'guilty', 'of', 'selling', 'dc', '#', '#', 'c', "'", 's', '(', 'euro', ')', '106', 'million', '(', 'then', '$', '130', 'million', ')', 'stake', 'in', 'f', '#', '#', 'y', '#', '#', 'ff', '##e', '#', '#', 's', 'after', 'fl', '#', '#', 'av', '##i', '#', '#', 'n', '-', '-', 'also', 'a', 'f', '#', '#', 'y', '#', '#', 'ff', '##e', '#', '#', 's', 'director', 'at', 'the', 'time', '-', '-', 'received', 'inside', 'information', 'about', 'bad', 'f', '#', '#', 'y', '#', '#', 'ff', '##e', '#', '#', 's', 'news', 'in', 'the', 'pipeline', '.']
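# A possible workaround (a sketch, for slow/python tokenizers): skip the second
# tokenization and convert the tokens to ids directly. Note that `tokenized` was
# overwritten by the second block above, so re-derive the correct 79-token list
# first; `tokenizer` is assumed to be the same BERT tokenizer used throughout.
tokens = tokenizer.tokenize(sent)
ids = tokenizer.convert_tokens_to_ids(tokens)
ids = ids + [tokenizer.pad_token_id] * (314 - len(ids))  # manual padding to max_length
print(len([i for i in ids if i > 1]))
# expected: 79 (the original token count is preserved)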
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8217/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8214/comments | https://api.github.com/repos/huggingface/transformers/issues/8214/events | https://github.com/huggingface/transformers/issues/8214 | 733,870,808 | MDU6SXNzdWU3MzM4NzA4MDg= | 8,214 | [Benchmark] | {
"login": "Debraroberts1975",
"id": 73767845,
"node_id": "MDQ6VXNlcjczNzY3ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/73767845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Debraroberts1975",
"html_url": "https://github.com/Debraroberts1975",
"followers_url": "https://api.github.com/users/Debraroberts1975/followers",
"following_url": "https://api.github.com/users/Debraroberts1975/following{/other_user}",
"gists_url": "https://api.github.com/users/Debraroberts1975/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Debraroberts1975/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Debraroberts1975/subscriptions",
"organizations_url": "https://api.github.com/users/Debraroberts1975/orgs",
"repos_url": "https://api.github.com/users/Debraroberts1975/repos",
"events_url": "https://api.github.com/users/Debraroberts1975/events{/privacy}",
"received_events_url": "https://api.github.com/users/Debraroberts1975/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8214/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8213/comments | https://api.github.com/repos/huggingface/transformers/issues/8213/events | https://github.com/huggingface/transformers/pull/8213 | 733,835,108 | MDExOlB1bGxSZXF1ZXN0NTEzNTE4MDUy | 8,213 | Fix ignore files behavior in doctests | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,606 | 1,604 | CONTRIBUTOR | null | In the doc tests (which, by the way, I'm aware are disabled), `ignore_files` uses a mutable default value, so when it is modified (e.g., when `__init__.py` is appended), the change carries over to subsequent calls that rely on the default value (i.e., calls that don't set the argument).
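To illustrate the pitfall, here is a minimal sketch of Python's shared-mutable-default behavior (illustrative names, not the repository's actual code):

```py
def collect_ignores(path, ignore_files=[]):  # buggy: one list object is shared by every call
    ignore_files.append("__init__.py")
    return ignore_files

print(collect_ignores("a"))  # ['__init__.py']
print(collect_ignores("b"))  # ['__init__.py', '__init__.py'] (the default was mutated)

def collect_ignores_fixed(path, ignore_files=None):  # safe: a fresh list per call
    if ignore_files is None:
        ignore_files = []
    ignore_files.append("__init__.py")
    return ignore_files
```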
I also fixed typing issues in the file and other minor issues. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8213/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8213",
"html_url": "https://github.com/huggingface/transformers/pull/8213",
"diff_url": "https://github.com/huggingface/transformers/pull/8213.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8213.patch",
"merged_at": 1604324857000
} |
https://api.github.com/repos/huggingface/transformers/issues/8212 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8212/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8212/comments | https://api.github.com/repos/huggingface/transformers/issues/8212/events | https://github.com/huggingface/transformers/issues/8212 | 733,804,394 | MDU6SXNzdWU3MzM4MDQzOTQ= | 8,212 | Pickle error | {
"login": "naturecreator",
"id": 39854185,
"node_id": "MDQ6VXNlcjM5ODU0MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39854185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naturecreator",
"html_url": "https://github.com/naturecreator",
"followers_url": "https://api.github.com/users/naturecreator/followers",
"following_url": "https://api.github.com/users/naturecreator/following{/other_user}",
"gists_url": "https://api.github.com/users/naturecreator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naturecreator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naturecreator/subscriptions",
"organizations_url": "https://api.github.com/users/naturecreator/orgs",
"repos_url": "https://api.github.com/users/naturecreator/repos",
"events_url": "https://api.github.com/users/naturecreator/events{/privacy}",
"received_events_url": "https://api.github.com/users/naturecreator/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you give all the information you can about your environment and pip list and I’m pinging @lhoestq.\r\n\r\nIf you can manage to reproduce the error in a google colab or shareable environment that would be the top for debugging.",
"@VictorSanh got a similar issue once. Did you install transformers using `pip install -e .` ?",
"> Can you give all the information you can about your environment and pip list and I’m pinging @lhoestq.\r\n> \r\n> If you can manage to reproduce the error in a google colab or shareable environment that would be the top for debugging.\r\n\r\n@thomwolf Please have a look at the [colab](https://colab.research.google.com/drive/1BlQF0-JYBVNsZXuIQsyVSGRQzsQzp0pl?usp=sharing). It is also reproducing the same error as before.",
"> @VictorSanh got a similar issue once. Did you install transformers using `pip install -e .` ?\r\n\r\n@lhoestq Yes, I have installed it from source using:\r\n```\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers\r\npip install -e .\r\n```\r\n\r\nI also tried installing as suggested here in [examples](https://github.com/huggingface/transformers/tree/master/examples) as:\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install .\r\npip install -r ./examples/requirements.txt\r\n```",
"I am trying to train roberta model from scratch using run_mlm.py file. But, facing the same issue.\r\n\r\n> Didn't find file ./model_output/tokenizer.json. We won't load it.\r\n> Didn't find file ./model_output/added_tokens.json. We won't load it.\r\n> Didn't find file ./model_output/special_tokens_map.json. We won't load it.\r\n> Didn't find file ./model_output/tokenizer_config.json. We won't load it.\r\n> loading file ./model_output/vocab.json\r\n> loading file ./model_output/merges.txt\r\n> loading file None\r\n> loading file None\r\n> loading file None\r\n> loading file None\r\n> Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Defaultto no truncation.\r\n> Traceback (most recent call last):\r\n> File \"transformers/examples/language-modeling/run_mlm.py\", line 310, in <module>\r\n> main()\r\n> File \"transformers/examples/language-modeling/run_mlm.py\", line 259, in main\r\n> load_from_cache_file=not data_args.overwrite_cache,\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 300, in map\r\n> for k, dataset in self.items()\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 300, in <dictcomp>\r\n> for k, dataset in self.items()\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1256, in map\r\n> update_data=update_data,\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 156, in wrapper\r\n> out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/fingerprint.py\", line 158, in wrapper\r\n> self._fingerprint, transform, kwargs_for_fingerprint\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/fingerprint.py\", line 105, in update_fingerprint\r\n> hasher.update(transform_args[key])\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/fingerprint.py\", line 57, in update\r\n> self.m.update(self.hash(value).encode(\"utf-8\"))\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/fingerprint.py\", line 53, in hash\r\n> return cls.hash_default(value)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/fingerprint.py\", line 46, in hash_default\r\n> return cls.hash_bytes(dumps(value))\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/utils/py_utils.py\", line 367, in dumps\r\n> dump(obj, file)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/datasets/utils/py_utils.py\", line 339, in dump\r\n> Pickler(file, recurse=True).dump(obj)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/dill/_dill.py\", line 446, in dump\r\n> StockPickler.dump(self, obj)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 409, in dump\r\n> self.save(obj)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 476, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/dill/_dill.py\", line 1438, in save_function\r\n> obj.__dict__, fkwdefaults), obj=obj)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 610, in save_reduce\r\n> save(args)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 476, in save\r\n> f(self, obj) # Call unbound method with 
explicit self\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 751, in save_tuple\r\n> save(element)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 476, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 736, in save_tuple\r\n> save(element)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 476, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/dill/_dill.py\", line 1170, in save_cell\r\n> pickler.save_reduce(_create_cell, (f,), obj=obj)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 610, in save_reduce\r\n> save(args)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 476, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 736, in save_tuple\r\n> save(element)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 521, in save\r\n> self.save_reduce(obj=obj, *rv)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 605, in save_reduce\r\n> save(cls)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 476, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/dill/_dill.py\", line 1365, in save_type\r\n> obj.__bases__, _dict), obj=obj)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 610, in save_reduce\r\n> save(args)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 476, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 751, in save_tuple\r\n> save(element)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 476, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/dill/_dill.py\", line 933, in save_module_dict\r\n> StockPickler.save_dict(pickler, obj)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 821, in save_dict\r\n> self._batch_setitems(obj.items())\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 847, in _batch_setitems\r\n> save(v)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 476, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/site-packages/dill/_dill.py\", line 933, in save_module_dict\r\n> StockPickler.save_dict(pickler, obj)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 821, in save_dict\r\n> self._batch_setitems(obj.items())\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 847, in _batch_setitems\r\n> save(v)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 507, in save\r\n> self.save_global(obj, rv)\r\n> File \"/anaconda/envs/azureml_py36/lib/python3.6/pickle.py\", line 927, in save_global\r\n> (obj, module_name, name))\r\n> _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union",
"I have the same problem, how to fix it?",
"\r\n\r\n\r\n> > @VictorSanh got a similar issue once. Did you install transformers using `pip install -e .` ?\r\n> \r\n> @lhoestq Yes, I have installed it from source using:\r\n> \r\n> ```\r\n> git clone https://github.com/huggingface/transformers.git\r\n> cd transformers\r\n> pip install -e .\r\n> ```\r\n> \r\n> I also tried installing as suggested here in [examples](https://github.com/huggingface/transformers/tree/master/examples) as:\r\n> \r\n> ```\r\n> git clone https://github.com/huggingface/transformers\r\n> cd transformers\r\n> pip install .\r\n> pip install -r ./examples/requirements.txt\r\n> ```\r\n\r\nYes @naturecreator, I had the same error last week. I managed to circumvent that by removing the editable mode when pip installing (from `pip install -e .` to a standard `pip install .`).\r\nIt worked for me both for python 3.6 and 3.7.",
"After the recent commit made to the [script](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py), it is running as expected without any errors.",
"Hey, I forked it and followed the solution given by @VictorSanh, but I am still getting this error. I am loading a custom dataset (text file) not a predefined one, and for Roberta-Base. Also using the --line_by_line parameter.Any ideas why this may be happening?",
"Tried by removing --line_by_line parameter. It works, but it is not taking line by line input anymore since we removed the parameter. I processed the file as a JSON for now. Is there a fix using the --line_by_line?",
"This an error that none of us on the team managed to fully reproduce, so if you could give us your full environment, that would be super helpful.",
"I would love to help. I am a bit new to this, do let me know if any more specifics are required. The versions of the required lib/lang are - \r\nPython - 3.6.7\r\ntransformers - 3.4.0\r\npickle - 4.0\r\n\r\nThe command I ran was -\r\npython3 run_mlm.py \\\r\n--model_name_or_path roberta-base \\ \r\n--train_file train.txt \\\r\n--validation_file test.txt \\ \r\n--do_train \\\r\n--do_eval \\\r\n--output_dir results/ \\ \r\n--line_by_line \\\r\n",
"Ahah! Can reproduce! This will make investigation easier.",
"For future reference, here is how I create an env reproducing the bug, and the command that shows it (self-contained to the repo):\r\n```\r\npyenv install 3.6.7\r\npyenv virtualenv 3.6.7 picklebug\r\npyenv activate picklebug\r\npip install --upgrade pip\r\npip install transformers[torch]\r\npip install datasets\r\ncd git/transformers # Adapt to your local path to the cloned repo\r\npip install -e .\r\npython examples/language-modeling/run_mlm.py \\\r\n--model_name_or_path roberta-base \\\r\n--train_file ./tests/fixtures/sample_text.txt \\\r\n--validation_file ./tests/fixtures/sample_text.txt \\\r\n--do_train \\\r\n--do_eval \\\r\n--output_dir /tmp/test=clm \\\r\n--line_by_line\r\n```",
"The bug disappears for me with python 3.7.9 so if you can upgrade your python version, you should be good to go.",
"Further reduced, the bug appears in all python versions <= 3.6.12 but disappears in python 3.7.0.",
"Thanks, this was really helpful !!!"
] | 1,604 | 1,604 | 1,604 | NONE | null | While fine-tuning BERT with the new [script](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py), I am facing the following issue:
```
Traceback (most recent call last):
File "run_mlm.py", line 310, in <module>
main()
File "run_mlm.py", line 259, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/dataset_dict.py", line 300, in map
for k, dataset in self.items()
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/dataset_dict.py", line 300, in <dictcomp>
for k, dataset in self.items()
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1256, in map
update_data=update_data,
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 367, in dumps
dump(obj, file)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 339, in dump
Pickler(file, recurse=True).dump(obj)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/dill/_dill.py", line 446, in dump
StockPickler.dump(self, obj)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 409, in dump
self.save(obj)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/dill/_dill.py", line 1438, in save_function
obj.__dict__, fkwdefaults), obj=obj)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/dill/_dill.py", line 1170, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 605, in save_reduce
save(cls)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/dill/_dill.py", line 1365, in save_type
obj.__bases__, _dict), obj=obj)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 507, in save
self.save_global(obj, rv)
File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/pickle.py", line 927, in save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union
```
I am trying to run the same script with the already-mentioned wikitext dataset. However, I am not able to run it successfully due to the above-mentioned error.
@sgugger Could you please help me resolve this error?
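For anyone digging into this, a minimal repro sketch of the hashing step that fails (an assumption based on the traceback, not a confirmed isolation; `datasets` dill-pickles the function passed to `map()` in order to fingerprint it):

```py
from typing import Optional

import dill

def tokenize_function(examples, text_column_name: Optional[str] = None):
    return examples

# On Python <= 3.6.12 this can raise:
#   PicklingError: Can't pickle typing.Union[str, NoneType]:
#   it's not the same object as typing.Union
dill.dumps(tokenize_function, recurse=True)
```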
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8212/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8211 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8211/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8211/comments | https://api.github.com/repos/huggingface/transformers/issues/8211/events | https://github.com/huggingface/transformers/issues/8211 | 733,789,766 | MDU6SXNzdWU3MzM3ODk3NjY= | 8,211 | Appropriate dataset format for language modeling example | {
"login": "arccoxx",
"id": 2307341,
"node_id": "MDQ6VXNlcjIzMDczNDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2307341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arccoxx",
"html_url": "https://github.com/arccoxx",
"followers_url": "https://api.github.com/users/arccoxx/followers",
"following_url": "https://api.github.com/users/arccoxx/following{/other_user}",
"gists_url": "https://api.github.com/users/arccoxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arccoxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arccoxx/subscriptions",
"organizations_url": "https://api.github.com/users/arccoxx/orgs",
"repos_url": "https://api.github.com/users/arccoxx/repos",
"events_url": "https://api.github.com/users/arccoxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/arccoxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"General questions should be asked on the [forum](https://discuss.huggingface.co/) as we keep the issues for bugs.\r\n`run_clm` doesn't use the approach of taking the different lines of a dataset (that's in `run_mlm` as it's usually done for masked language modeling) so you will have to tweak the example script to do this.\r\n\r\nThere is no way to pretokenzie, but the result of the tokenization will be cached, so it will only be run once on a given machine.",
"Thank you for the help! I won't make that mistake again my apologies. Editing the clm file now. New to open source software and excited to dig in and help please bear with me while I learn the ropes!",
"No worries, I'm just telling you for next time :-)\r\nGood luck with your scripting, closing this issue for now."
] | 1,604 | 1,604 | 1,604 | NONE | null | # What is the most memory efficient way/best way to format your dataset file for language modeling?
## Details
I am running run_clm.py and can only get my dataset to work with the smallest GPT-2 model. I would ideally like to experiment with gpt2-xl, but would settle for gpt2-large or XLNet. I am using distributed training on TPU, assuming that this improves memory headroom. I have saved my file as a .txt; it is roughly 20 MB with 82,000 samples (mean length 256, std 250) and is line-delimited (each line is one sample).
Is this the correct approach? I notice the .raw files used in training; are these smaller? Is there a way to pre-tokenize? (A rough sketch of one caching approach is included below.)
I hesitate to ask this on Stack Overflow as it is not a bug.
@sgugger, sorry to bother you. If this is a common issue I'd love to hear more; no worries if it requires much work, I can focus on that. I'm just lost for resources.
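For reference, a rough sketch of pre-tokenizing a line-delimited file with `datasets` so the result is cached on disk (names and lengths here are illustrative, not taken from run_clm.py):

```py
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
dataset = load_dataset("text", data_files={"train": "train.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

# map() writes its output to the datasets cache, so tokenization only runs
# once per machine; later runs reload the cached arrow files.
tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
```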
Thank you all! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8211/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8210 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8210/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8210/comments | https://api.github.com/repos/huggingface/transformers/issues/8210/events | https://github.com/huggingface/transformers/issues/8210 | 733,782,398 | MDU6SXNzdWU3MzM3ODIzOTg= | 8,210 | Simple import issue for run_clm.py | {
"login": "arccoxx",
"id": 2307341,
"node_id": "MDQ6VXNlcjIzMDczNDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2307341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arccoxx",
"html_url": "https://github.com/arccoxx",
"followers_url": "https://api.github.com/users/arccoxx/followers",
"following_url": "https://api.github.com/users/arccoxx/following{/other_user}",
"gists_url": "https://api.github.com/users/arccoxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arccoxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arccoxx/subscriptions",
"organizations_url": "https://api.github.com/users/arccoxx/orgs",
"repos_url": "https://api.github.com/users/arccoxx/repos",
"events_url": "https://api.github.com/users/arccoxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/arccoxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there, as mentioned in the README of the examples (in bold), you need to install transformers [from source](https://huggingface.co/transformers/installation.html#installing-from-source) to use that script.",
"I did thank, you for the rapid reply! Comfortable programmer, files not so much so again thank you for bearing with me. Installed via:\r\n\r\n!pip install git+https://github.com/huggingface/transformers.git",
"Yep thats the issue sorry for being a moron and this can be closed. Perhaps someone of multicellular intelligence could explain why my !pip git+ solution is insufficient?\r\n\r\nThank you @sgugger",
"Mmmm, maybe you had the repo cached somewhere and it didn't update to the latest version? Glad your issue is fixed :-) ",
"Maybe because you already had `transformers` installed in your environment, in which case you would have to supply the `-U` option to update to the specific version you're targeting ",
"I have a issue of fine-tuning the T5 model(t5-base). \r\n\r\n!python /content/transformers/examples/language-modeling/run_clm.py\r\n--model_name_or_path t5-base\r\n--train_file /content/train.txt\r\n--do_train\r\n--output_dir /tmp/test-clm\r\n\r\nThis code is not working. How can I fine-tune the t5-base model?"
] | 1,604 | 1,626 | 1,604 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Other details: N/A. Running on Colab and not sure how to find them, sorry.
### Who can help
@sgugger @TevenLeScao @LysandreJik
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
run_clm.py
* [ ] my own modified scripts: (give details below)
na
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
language modeling for generation with custom data file
* [ ] my own task or dataset: (give details below)
my own dataset in a file train-4.txt which is line delimited
## To reproduce
Steps to reproduce the behavior:
1. Run this script:
!python /content/transformers/examples/language-modeling/run_clm.py \
--model_name_or_path gpt2-medium \
--train_file /content/train-4.txt \
--do_train \
--output_dir /tmp/test-clm
this is the error:
ImportError: cannot import name 'is_main_process'
## Expected behavior
working language modeling script
`is_main_process` should be importable from `transformers.trainer_utils`
Thank you!
Will be using this model downstream for some more advanced tasks; I'm trying to get through fine-tuning quickly so I can get a jump on the fun stuff. I'm working on an academic art project, so fast help would be deeply appreciated (no rush, obviously) so that creative iteration may commence. Thank you once again!
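For reference, one way to force a fresh source install in a Colab cell (a sketch; `-U` is the standard pip upgrade flag and replaces any cached copy, which the comments above suggest was the culprit):

```
!pip uninstall -y transformers
!pip install -U git+https://github.com/huggingface/transformers.git
```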
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8210/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8209/comments | https://api.github.com/repos/huggingface/transformers/issues/8209/events | https://github.com/huggingface/transformers/issues/8209 | 733,723,156 | MDU6SXNzdWU3MzM3MjMxNTY= | 8,209 | XLMRobertaTokenizer potential bug | {
"login": "arahusky",
"id": 5543788,
"node_id": "MDQ6VXNlcjU1NDM3ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5543788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arahusky",
"html_url": "https://github.com/arahusky",
"followers_url": "https://api.github.com/users/arahusky/followers",
"following_url": "https://api.github.com/users/arahusky/following{/other_user}",
"gists_url": "https://api.github.com/users/arahusky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arahusky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arahusky/subscriptions",
"organizations_url": "https://api.github.com/users/arahusky/orgs",
"repos_url": "https://api.github.com/users/arahusky/repos",
"events_url": "https://api.github.com/users/arahusky/events{/privacy}",
"received_events_url": "https://api.github.com/users/arahusky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
}
] | [
"Indeed this seems a bit strange.\r\n\r\nPining @n1t0 and @Narsil here (actually this should probably rather be an issue in the https://github.com/huggingface/tokenizers repo)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@n1t0 Any update on this?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,604 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.4.60-1-pve-x86_64-with-debian-buster-sid
- Python version: 3.6.3
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
@mfuntowicz
## Information
I need to align the tokens obtained with XLMRobertaTokenizer to the original text. Normally, I use the fast tokenizer API and the `token_to_chars` method to get the mapping. However, when I use it with XLMRobertaTokenizer, there seems to be a bug in the output (the returned token indices tend to skip some tokens and sometimes do not correspond).
## To reproduce
```
from transformers import AutoTokenizer
input_line = 'walnut , 17.6.2007 22:20:59 , ip : *** . ***.108.25 , # 10305dobry den , nedavno me znicehonic zacal bolet nart prave nohy-spatne se s nim pohybuje , boli me pri chuzi .'
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base', use_fast = True)
res = tokenizer(input_line)
print(res.tokens()[:5])
```
```
['<s>', '▁wal', 'nut', '▁', ',']
```
```
for i in range(15):
cur_char_word_ind = res.char_to_token(i)
print(input_line[i], cur_char_word_ind)
```
```
w 1
a 1
l 1
n 2
u 2
t 2
None
, 3
None
1 5
7 5
. 6
6 6
. 6
2 7
```
The line `, 3` is wrong, as `,` should be aligned to the fourth token `,`. The fourth token is not used and is skipped.
## Expected behavior
Possibly
```
w 1
a 1
l 1
n 2
u 2
t 2
3
, 4
None
1 5
7 5
. 6
6 6
. 6
2 7
```
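While this is investigated, a possible workaround sketch using the fast tokenizer's offset mapping instead of `char_to_token` (an assumption on my part: the offsets stay consistent even where `char_to_token` skips tokens):

```
from transformers import AutoTokenizer

input_line = 'walnut , 17.6.2007 22:20:59 , ip : *** . ***.108.25 , # 10305dobry den , nedavno me znicehonic zacal bolet nart prave nohy-spatne se s nim pohybuje , boli me pri chuzi .'
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base', use_fast=True)
res = tokenizer(input_line, return_offsets_mapping=True)

# offset_mapping holds a (start, end) character span per token, which can be
# used to build a char -> token index directly (special tokens get empty spans)
char_to_tok = {}
for tok_idx, (start, end) in enumerate(res['offset_mapping']):
    for c in range(start, end):
        char_to_tok[c] = tok_idx
```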
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8209/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8208/comments | https://api.github.com/repos/huggingface/transformers/issues/8208/events | https://github.com/huggingface/transformers/issues/8208 | 733,701,576 | MDU6SXNzdWU3MzM3MDE1NzY= | 8,208 | Poor f1 score when validating existing models | {
"login": "omrishsu",
"id": 7582428,
"node_id": "MDQ6VXNlcjc1ODI0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7582428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omrishsu",
"html_url": "https://github.com/omrishsu",
"followers_url": "https://api.github.com/users/omrishsu/followers",
"following_url": "https://api.github.com/users/omrishsu/following{/other_user}",
"gists_url": "https://api.github.com/users/omrishsu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omrishsu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omrishsu/subscriptions",
"organizations_url": "https://api.github.com/users/omrishsu/orgs",
"repos_url": "https://api.github.com/users/omrishsu/repos",
"events_url": "https://api.github.com/users/omrishsu/events{/privacy}",
"received_events_url": "https://api.github.com/users/omrishsu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | I have been attempting to validate the results on several models, but have been unable to match them with the results posted on the model cards.
For example, I took an Albert model: https://huggingface.co/ahotrod/albert_xxlargev1_squad2_512
and I run:
```
python run_squad.py
--model_type albert
--model_name_or_path ahotrod/albert_xxlargev1_squad2_512
--do_eval
--predict_file ../squd/dev-v2.0.json
--per_gpu_eval_batch_size 8
--max_seq_length 512
--doc_stride 128
--output_dir ../squd/output/albert
--overwrite_output_dir
--threads 16
--verbose
--version_2_with_negative
```
I got:
```
exact: 77.39408742525058
f1: 81.6576936707378
total: 11873
HasAns_exact: 71.60931174089069
HasAns_f1: 80.14875117285251
HasAns_total: 5928
NoAns_exact: 83.16232127838519
NoAns_f1: 83.16232127838519
NoAns_total: 5945
best_exact: 77.38566495409754
best_exact_thresh: 0.0
best_f1: 81.64927119958456
best_f1_thresh: 0.0
```
While on the model card stated:
```
exact: 86.11134506864315
f1: 89.35371214945009
total: 11873
HasAns_exact: 83.56950067476383
HasAns_f1: 90.06353312254078
HasAns_total: 5928
NoAns_exact: 88.64592094196804
NoAns_f1: 88.64592094196804
NoAns_total: 5945
best_exact: 86.11134506864315
best_exact_thresh: 0.0
best_f1: 89.35371214944985
best_f1_thresh: 0.0
```
I did not alter any code; I was simply trying to validate the results. What am I overlooking here?
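One thing worth checking (an assumption on my part, not a confirmed cause): run_squad.py requires an explicit lowercasing flag for uncased checkpoints such as this ALBERT one, and omitting it can cost several F1 points. A sketch of the same command with the flag added:

```
python run_squad.py
--model_type albert
--model_name_or_path ahotrod/albert_xxlargev1_squad2_512
--do_eval
--do_lower_case
--predict_file ../squd/dev-v2.0.json
--per_gpu_eval_batch_size 8
--max_seq_length 512
--doc_stride 128
--output_dir ../squd/output/albert
--overwrite_output_dir
--threads 16
--verbose
--version_2_with_negative
```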
(the same difference is also found using RoBERTa, BERT, and other models) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8208/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8207/comments | https://api.github.com/repos/huggingface/transformers/issues/8207/events | https://github.com/huggingface/transformers/pull/8207 | 733,687,432 | MDExOlB1bGxSZXF1ZXN0NTEzNDA5MDgz | 8,207 | Updated ConversationalPipeline to work with encoder-decoder models | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Looks great! Can we maybe add one test with the small `BlenderbotModel` to `/home/patrick/hugging_face/transformers/tests/test_pipelines_conversational.py` ?\r\n\r\nThank you @patrickvonplaten , I added an integration test using BlenderBot 90M.",
"Merging, unrelated failure."
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
This PR extends the capabilities of the existing ConversationalPipeline to work with Encoder Decoder models (such as BlenderBot).
The pipeline has been modified as follows:
- the history is now generated by concatenating the inputs with the generated tokens for encoder-decoders (decoders directly use the generated tokens, which contain the initial prompt)
- updated the cut-off position for generated tokens (1 for encoder-decoders, `input_length` for decoders)
- updated the clean-up script to remove all pad tokens if `pad_token` != `eos_token`; otherwise, when `pad_token` and `eos_token` are identical, pad tokens are removed starting from the second one found (the previous behaviour of the pipeline). This is needed because otherwise models with a distinct `eos_token` would keep an unnecessary `pad_token` that degrades generation quality in subsequent rounds.
This has been tested with the BlenderBot 90M model (requires https://github.com/huggingface/transformers/pull/8205), producing the following output: https://gist.github.com/guillaume-be/380b98ec1ef91d0f6e3add5914dd92ce
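For reference, a usage sketch of the updated pipeline with an encoder-decoder model (the model id below is an assumption, matching the 90M checkpoint used in the gist):

```python
from transformers import pipeline, Conversation

chatbot = pipeline("conversational", model="facebook/blenderbot-90M")

conversation = Conversation("What's your favorite book?")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])

# follow-up turns append to the same Conversation object, exercising the
# new history handling for encoder-decoders
conversation.add_user_input("Why do you like it?")
conversation = chatbot(conversation)
```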
## Who can review?
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8207/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8207",
"html_url": "https://github.com/huggingface/transformers/pull/8207",
"diff_url": "https://github.com/huggingface/transformers/pull/8207.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8207.patch",
"merged_at": 1604417582000
} |
https://api.github.com/repos/huggingface/transformers/issues/8206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8206/comments | https://api.github.com/repos/huggingface/transformers/issues/8206/events | https://github.com/huggingface/transformers/issues/8206 | 733,686,719 | MDU6SXNzdWU3MzM2ODY3MTk= | 8,206 | Sentence transformer Segmentation Fault - Pytorch 1.4.0, 2.80 | {
"login": "elangovana",
"id": 5715658,
"node_id": "MDQ6VXNlcjU3MTU2NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5715658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elangovana",
"html_url": "https://github.com/elangovana",
"followers_url": "https://api.github.com/users/elangovana/followers",
"following_url": "https://api.github.com/users/elangovana/following{/other_user}",
"gists_url": "https://api.github.com/users/elangovana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elangovana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elangovana/subscriptions",
"organizations_url": "https://api.github.com/users/elangovana/orgs",
"repos_url": "https://api.github.com/users/elangovana/repos",
"events_url": "https://api.github.com/users/elangovana/events{/privacy}",
"received_events_url": "https://api.github.com/users/elangovana/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This will also be fixed by #8073 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | ## Environment info
- `transformers` version:
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.4 (Both)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Doesn't Matter
### Who can help
@LysandreJik
## Information
Model I am using (Bert):
The problem arises when using:
* [ ] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Open python console, with torch 1.4 installed
```python
import transformers
```
This crashes with the error:
```text
Segmentation fault
```
## Expected behavior
Works as normal without error
<!-- A clear and concise description of what you would expect to happen. -->
The only way to fix this is to force-install sentencepiece==0.1.91. The root cause is that https://github.com/huggingface/transformers/blob/v2.8.0/setup.py doesn't pin the version of sentencepiece.
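A sketch of the workaround (the pin below is the version reported to work; run it in a fresh environment before the first import):

```python
import subprocess, sys

# pin sentencepiece to the last known-good version before importing transformers
subprocess.check_call([sys.executable, "-m", "pip", "install", "sentencepiece==0.1.91"])

import transformers  # with the pin in place, the import should no longer segfault
print(transformers.__version__)
```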
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8206/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8205/comments | https://api.github.com/repos/huggingface/transformers/issues/8205/events | https://github.com/huggingface/transformers/pull/8205 | 733,677,350 | MDExOlB1bGxSZXF1ZXN0NTEzNDAxODk4 | 8,205 | [Bug fix] Fixed value for BlenderBot pad token | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
The current `BlenderbotSmallTokenizer` has an incorrect (probably a typo) value for the `pad_token`. This causes the BlenderBot model to crash on padded sequences (currently pads with a value that exceeds the embedding matrix size).
This PR fixes the behaviour and the tokenizer now pads correctly with `0`.
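For reference, a quick verification sketch (the model id is an assumption; the 90M checkpoint is the one this tokenizer targets):

```python
from transformers import BlenderbotSmallTokenizer

tok = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot-90M")
print(tok.pad_token, tok.pad_token_id)  # with this PR, the pad id should be 0

enc = tok(["hello there", "hi"], padding=True)
# padded positions now stay inside the embedding matrix range
print(enc["input_ids"])
```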
## Who can review?
Blenderbot, Bart, Marian, Pegasus: @sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8205/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8205",
"html_url": "https://github.com/huggingface/transformers/pull/8205",
"diff_url": "https://github.com/huggingface/transformers/pull/8205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8205.patch",
"merged_at": 1604244118000
} |
https://api.github.com/repos/huggingface/transformers/issues/8204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8204/comments | https://api.github.com/repos/huggingface/transformers/issues/8204/events | https://github.com/huggingface/transformers/issues/8204 | 733,627,414 | MDU6SXNzdWU3MzM2Mjc0MTQ= | 8,204 | [Benchmark] | {
"login": "123-kalai",
"id": 58583038,
"node_id": "MDQ6VXNlcjU4NTgzMDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/58583038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/123-kalai",
"html_url": "https://github.com/123-kalai",
"followers_url": "https://api.github.com/users/123-kalai/followers",
"following_url": "https://api.github.com/users/123-kalai/following{/other_user}",
"gists_url": "https://api.github.com/users/123-kalai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/123-kalai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/123-kalai/subscriptions",
"organizations_url": "https://api.github.com/users/123-kalai/orgs",
"repos_url": "https://api.github.com/users/123-kalai/repos",
"events_url": "https://api.github.com/users/123-kalai/events{/privacy}",
"received_events_url": "https://api.github.com/users/123-kalai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8204/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8203/comments | https://api.github.com/repos/huggingface/transformers/issues/8203/events | https://github.com/huggingface/transformers/pull/8203 | 733,618,936 | MDExOlB1bGxSZXF1ZXN0NTEzMzYyODE1 | 8,203 | Add TFDPR | {
"login": "ratthachat",
"id": 56621342,
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratthachat",
"html_url": "https://github.com/ratthachat",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @LysandreJik , thanks for the great review! \r\nI will fix and test the code as suggested.\r\n\r\nHowever, I have a very newbie question about style and document fixing. I admit that I can access only Colab (I have only windows PC) so I am really not sure how to do `make fixup` and `make docs` (in my understanding, should run in linux environment) . Could you please make some suggestions on this issue ?",
"Hmmm I think you could do the following in your colab environment:\r\n\r\n```py\r\n# Clone the repo or your fork, I assume this is your working directory\r\n!git clone https://github.com/huggingface/transformers\r\n!cd transformers\r\n\r\n# Install all the dev dependencies (you've probably done that already)\r\n!pip install -e .[dev]\r\n\r\n# Then you should be able to run `make fixup`\r\n!make fixup\r\n\r\n# Same for the docs!\r\n!make docs\r\n```\r\nLet me know if that works!",
"@LysandreJik , thanks again and I could run the two make commands.\r\n\r\n1) `make docs` , produced the error message which I have no clue, so I still need suggestion on this issue, sorry:\r\n```\r\ncd docs && make html SPHINXOPTS=\"-W\"\r\nmake[1]: Entering directory '/content/transformers/docs'\r\nRunning Sphinx v1.8.5\r\n\r\nExtension error:\r\nCould not import extension recommonmark (exception: No module named recommonmark)\r\nMakefile:19: recipe for target 'html' failed\r\nmake[1]: *** [html] Error 2\r\nmake[1]: Leaving directory '/content/transformers/docs'\r\nMakefile:68: recipe for target 'docs' failed\r\nmake: *** [docs] Error 2\r\n```\r\n\r\n\r\n2) `make fixup` , required `make fix-copies` and then `make fixup` suggest me to add more tests on `test_modeling_tf_dpr.py` which I will investigate this issue and come back :D",
"Hey @ratthachat - great work! I helped you a bit with the docs and did some cleaning. I think we can merge the PR soon. It would be great if you could take a look at the comments above (a lot of them should already be resolved now) and also it would be awesome if you could add one integration test to both TF and PT (we forgot to do this originally for PyTorch).\r\n\r\nAn integration tests should be a slow test, where you can just statically type some input_ids vector and run it through one of the PyTorch pretrained models and test against its expected output. You should use the same input / expected output array then for Tensorflow, similar to how it's done here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4185b115d4b3fd408265ffd91581698325652c47/tests/test_modeling_roberta.py#L423\r\n\r\nLet me know if you have any questions!",
"Thanks very much for your great help, Patrick @patrickvonplaten !! I will get back to you guys as soon as possible.",
"Hi guys, with the great helps of Patrick, most comments of @LysandreJik were already dealt with. So I further addressed the rest as replied above. BTW, I did only very minimal and necessary changes, but many tests are now failed again , sorry.. I have no idea about this :( .\r\n\r\n@patrickvonplaten I added one slow model integration test to `test_modeling_tf_dpr.py` . However, at the moment I still could not find a way to run original DPR repo to produce original output yet. So at the moment, the integration test is just a chek that TF and Pytorch `DPRQuestionEncoder` models produce the same output (with acceptable margin of error) -- i.e. the `tf.constant` expected slice comes from Pytorch's model.\r\n\r\n(test can be played around here : https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing )\r\n\r\nI will come back to add more model integration tests if I succeed to run the [original DPR](https://github.com/facebookresearch/DPR/blob/master/generate_dense_embeddings.py).",
"> Hi Patrick, I have a question. At the moment, we do not have native TF weights, so removing this is OK ?\r\n\r\nI uploaded them a minute ago ;-) ",
"Thanks so much everyone. Very happy :D 👯 \r\nSee you guys again soon on TFRag (WIP)",
"@patrickvonplaten I think you didn't upload all of the weights on the model hub. I'm uploading the remaining weights now:\r\n\r\n- `facebook/dpr-ctx_encoder-single-nq-base`\r\n- `facebook/dpr-ctx_encoder-multiset-base`\r\n- `facebook/dpr-question_encoder-multiset-base`\r\n- `facebook/dpr-reader-single-nq-base`\r\n- `facebook/dpr-reader-multiset-base`\r\n",
"(Current slow CI is failing because it's trying to load some of them)",
"They're all uploaded."
] | 1,604 | 1,608 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
Add `TFDPRContextEncoder`, `TFDPRQuestionEncoder` and `TFDPRReader` in `modeling_tf_dpr.py`, as well as other relevant files in the [checklist](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model) .
Now the TF model works properly: it can load PyTorch's weights successfully and produces the same output as its PyTorch counterparts **except** for small random noise (~1e-5), which I suspect comes from a dtype difference, though I could not find the cause.
We can try playing with the TFDPR models and compare them to the PyTorch ones [here in Colab](https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing)
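For reference, a condensed sketch of the parity check described above (the full version is in the Colab); the checkpoint name is one of the public DPR checkpoints, and the tolerance is chosen to absorb the ~1e-5 noise:
```python
import numpy as np
import torch
from transformers import (
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
    TFDPRQuestionEncoder,
)

name = "facebook/dpr-question_encoder-single-nq-base"
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(name)

pt_model = DPRQuestionEncoder.from_pretrained(name)
# Load the same PyTorch checkpoint into the TF class added by this PR.
tf_model = TFDPRQuestionEncoder.from_pretrained(name, from_pt=True)

question = "what is the capital of france ?"
with torch.no_grad():
    pt_pooled = pt_model(**tokenizer(question, return_tensors="pt"))[0].numpy()
tf_pooled = tf_model(tokenizer(question, return_tensors="tf"))[0].numpy()

# The two implementations should agree up to the small noise mentioned above.
assert np.allclose(pt_pooled, tf_pooled, atol=1e-4)
```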
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
Here: https://github.com/huggingface/transformers/issues/8171
- [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
Yes, I wrote a simple test checking whether the TF and PyTorch models with pretrained weights give the same output (up to very small random noise). The test is in the Colab (shared above).
The model also passes all 27 tests in the test_modeling_tf_dpr.py file (please see the last cell in the Colab above)
## Who can review?
@LysandreJik
# Details what were done according to Checklist
## Adding model/configuration/tokenization classes
Mostly done due to pre-existing of Pytorch's DPR.
- [X] Copy the python files from the present folder to the main folder and rename them, replacing `xxx` with your model
name.
- [X] Edit the files to replace `XXX` (with various casing) with your model name.
- [X] Copy-paste or create a simple configuration class for your model in the `configuration_...` file.
- [X] Copy-paste or create the code for your model in the `modeling_...` files (PyTorch and TF 2.0).
- [X] Copy-paste or create a tokenizer class for your model in the `tokenization_...` file.
## Adding conversion scripts
- [ ] Copy the conversion script (`convert_...`) from the present folder to the main folder.
- [ ] Edit this script to convert your original checkpoint weights to the current pytorch ones.
Not sure what to do here, since pretrained DPR weights already exist.
## Adding tests:
- [X] Copy the python files from the `tests` sub-folder of the present folder to the `tests` subfolder of the main
folder and rename them, replacing `xxx` with your model name.
- [X] Edit the tests files to replace `XXX` (with various casing) with your model name.
- [X] Edit the tests code as needed.
The model passes all 27 tests in the test_modeling_tf_dpr.py file (please see the last cell in the Colab above) -- this was updated 4 days after I made the first PR.
## Documenting your model:
- [X] Make sure all your arguments are properly documented in your configuration and tokenizer.
- [X] Most of the documentation of the models is automatically generated, you just have to make sure that
`XXX_START_DOCSTRING` contains an introduction to the model you're adding and a link to the original
article and that `XXX_INPUTS_DOCSTRING` contains all the inputs of your model.
- [X] Create a new page `xxx.rst` in the folder `docs/source/model_doc` and add this file in `docs/source/index.rst`.
## Final steps
(Note: the PyTorch DPR already existed, so I assume I should mark these as "Done".)
- [X] Add import for all the relevant classes in `__init__.py`.
- [X] Add your configuration in `configuration_auto.py`.
- [X] Add your PyTorch and TF 2.0 model respectively in `modeling_auto.py` and `modeling_tf_auto.py`.
- [X] Add your tokenizer in `tokenization_auto.py`.
- [ ] Add a link to your conversion script in the main conversion utility (in `commands/convert.py`)
- [X] Edit the PyTorch to TF 2.0 conversion script to add your model in the `convert_pytorch_checkpoint_to_tf2.py`
file.
- [ ] Add a mention of your model in the doc: `README.md` and the documentation itself
in `docs/source/pretrained_models.rst`. Run `make fix-copies` to update `docs/source/index.rst` with your changes.
- [X] Upload the pretrained weights, configurations and vocabulary files.
- [ ] Create model card(s) for your models on huggingface.co. For those last two steps, check the
[model sharing documentation](https://huggingface.co/transformers/model_sharing.html).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8203/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8203/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8203",
"html_url": "https://github.com/huggingface/transformers/pull/8203",
"diff_url": "https://github.com/huggingface/transformers/pull/8203.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8203.patch",
"merged_at": 1605115690000
} |
https://api.github.com/repos/huggingface/transformers/issues/8202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8202/comments | https://api.github.com/repos/huggingface/transformers/issues/8202/events | https://github.com/huggingface/transformers/issues/8202 | 733,596,217 | MDU6SXNzdWU3MzM1OTYyMTc= | 8,202 | 'SummaryWriter' object has no attribute 'add_hparams' | {
"login": "ZiningZhu",
"id": 9517560,
"node_id": "MDQ6VXNlcjk1MTc1NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9517560?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZiningZhu",
"html_url": "https://github.com/ZiningZhu",
"followers_url": "https://api.github.com/users/ZiningZhu/followers",
"following_url": "https://api.github.com/users/ZiningZhu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZiningZhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZiningZhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZiningZhu/subscriptions",
"organizations_url": "https://api.github.com/users/ZiningZhu/orgs",
"repos_url": "https://api.github.com/users/ZiningZhu/repos",
"events_url": "https://api.github.com/users/ZiningZhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZiningZhu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-72-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.12
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Tried both 1 gpu and 2 gpus. Got the same result.
Additional env information from `pip freeze`:
- tensorboardX==1.6
- tensorflow==2.2.0 (I did not install tensorflow in this conda environment, but it is present system-wide, so I think pip reads from that. `import tensorflow` in a Python script raises `ImportError`, so tensorflow should be considered uninstalled here).
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): `bert-base-cased`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below; in steps to reproduce the situation)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) MNLI
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Copy the `run_glue.py` from [cdc48ce](https://github.com/huggingface/transformers/commit/cdc48ce92ddf50e7ad871376be651638268b2e9a) (the newest version up till now).
2. Comment out the `from transformers.trainer_utils import is_main_process` line and insert the snippet below (this import throws an exception; pasting the code circumvents the problem):
```
def is_main_process(local_rank):
"""
    Whether or not the current process is the local process, based on `local_rank`.
"""
return local_rank in [-1, 0]
```
3. Run the following script:
```
export GLUE_DIR=../../data/glue_data
export TASK_NAME=MNLI
python run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--do_predict \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 2e-5 \
--num_train_epochs 2 \
--output_dir $TASK_NAME/
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The error message is:
```
Traceback (most recent call last):
File "run_glue.py", line 421, in <module>
main()
File "run_glue.py", line 356, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/h/zining/.conda/envs/myenv/lib/python3.6/site-packages/transformers/trainer.py", line 717, in train
self.control = self.callback_handler.on_train_begin(self.args, self.state, self.control)
File "/h/zining/.conda/envs/myenv/lib/python3.6/site-packages/transformers/trainer_callback.py", line 329, in on_train_begin
return self.call_event("on_train_begin", args, state, control)
File "/h/zining/.conda/envs/myenv/lib/python3.6/site-packages/transformers/trainer_callback.py", line 376, in call_event
**kwargs,
File "/h/zining/.conda/envs/myenv/lib/python3.6/site-packages/transformers/integrations.py", line 218, in on_train_begin
self.tb_writer.add_hparams(args.to_sanitized_dict(), metric_dict={})
AttributeError: 'SummaryWriter' object has no attribute 'add_hparams'
```
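One plausible diagnosis, stated as an assumption rather than a confirmed root cause: the environment above pins `tensorboardX==1.6`, and the trace dies when the TensorBoard integration calls `add_hparams`, a method that old tensorboardX releases do not have. A minimal sketch of the check:
```python
# Sketch assuming the writer in use comes from tensorboardX (the Trainer
# falls back to it when torch.utils.tensorboard cannot be imported).
from tensorboardX import SummaryWriter

writer = SummaryWriter()
print(hasattr(writer, "add_hparams"))  # False on tensorboardX 1.6

# If this prints False, upgrading tensorboardX should restore the method:
#   pip install -U tensorboardX
```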
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expect running `run_glue.py` to fine-tune on the given GLUE task.
Note: Issue #4511 is similar, but it was thrown in `trainer.py`, whereas my issue is thrown in `trainer_callback.py`. I think these two issues have different causes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8202/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8201/comments | https://api.github.com/repos/huggingface/transformers/issues/8201/events | https://github.com/huggingface/transformers/issues/8201 | 733,591,224 | MDU6SXNzdWU3MzM1OTEyMjQ= | 8,201 | New model addition | {
"login": "klaudia122195",
"id": 73721756,
"node_id": "MDQ6VXNlcjczNzIxNzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/73721756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klaudia122195",
"html_url": "https://github.com/klaudia122195",
"followers_url": "https://api.github.com/users/klaudia122195/followers",
"following_url": "https://api.github.com/users/klaudia122195/following{/other_user}",
"gists_url": "https://api.github.com/users/klaudia122195/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klaudia122195/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klaudia122195/subscriptions",
"organizations_url": "https://api.github.com/users/klaudia122195/orgs",
"repos_url": "https://api.github.com/users/klaudia122195/repos",
"events_url": "https://api.github.com/users/klaudia122195/events{/privacy}",
"received_events_url": "https://api.github.com/users/klaudia122195/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"##",
"P#86",
"user blocked"
] | 1,604 | 1,604 | 1,604 | NONE | null | # 🌟 New model addition
## Model description
<!-- Important information -->
## Open source status
* [x] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [x] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8201/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8200/comments | https://api.github.com/repos/huggingface/transformers/issues/8200/events | https://github.com/huggingface/transformers/issues/8200 | 733,584,454 | MDU6SXNzdWU3MzM1ODQ0NTQ= | 8,200 | Mmmmianam | {
"login": "klaudia122195",
"id": 73721756,
"node_id": "MDQ6VXNlcjczNzIxNzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/73721756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klaudia122195",
"html_url": "https://github.com/klaudia122195",
"followers_url": "https://api.github.com/users/klaudia122195/followers",
"following_url": "https://api.github.com/users/klaudia122195/following{/other_user}",
"gists_url": "https://api.github.com/users/klaudia122195/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klaudia122195/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klaudia122195/subscriptions",
"organizations_url": "https://api.github.com/users/klaudia122195/orgs",
"repos_url": "https://api.github.com/users/klaudia122195/repos",
"events_url": "https://api.github.com/users/klaudia122195/events{/privacy}",
"received_events_url": "https://api.github.com/users/klaudia122195/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"😁",
"#876 "
] | 1,604 | 1,604 | 1,604 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8200/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8199/comments | https://api.github.com/repos/huggingface/transformers/issues/8199/events | https://github.com/huggingface/transformers/issues/8199 | 733,509,030 | MDU6SXNzdWU3MzM1MDkwMzA= | 8,199 | Sentencepiece dependency causing docker build to fail | {
"login": "joshzwiebel",
"id": 34662010,
"node_id": "MDQ6VXNlcjM0NjYyMDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/34662010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshzwiebel",
"html_url": "https://github.com/joshzwiebel",
"followers_url": "https://api.github.com/users/joshzwiebel/followers",
"following_url": "https://api.github.com/users/joshzwiebel/following{/other_user}",
"gists_url": "https://api.github.com/users/joshzwiebel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshzwiebel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshzwiebel/subscriptions",
"organizations_url": "https://api.github.com/users/joshzwiebel/orgs",
"repos_url": "https://api.github.com/users/joshzwiebel/repos",
"events_url": "https://api.github.com/users/joshzwiebel/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshzwiebel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This will be fixed when #8073 is merged.",
"Is there any help in terms of what version to pin to in order to avoid this? This is currently a huge blocker on my end.",
"On the sentencepiece side I don’t know (you can open an issue on their side to ask) but on the `transformers` side we are actively working on removing the hard dependency on sentencepiece and we estimate we should have a new release removing this dependency around the end of next week.\r\n\r\nCc @n1t0 and @Narsil whose work on `tokenizers` is essential to unlock this.",
"Great! Thanks for the info!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:3.4.0
- Platform:Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.7.0 no gpu
- Tensorflow version (GPU?): N/A
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
The feature I am using is the question-answering pipeline. There is a problem with the sentencepiece dependency of transformers: pip cannot find a suitable prebuilt package during installation, which causes the build process to fail.
The problem arises when using:
Installing the transformers library in a Docker container running Ubuntu.
The tasks I am working on is:
Uploading a transformers script to AWS Fargate
## To reproduce
Steps to reproduce the behavior:
1. Create a project
2. Try to create docker container using dockerfile attached below
I have attached the relevant parts of my dockerfile below
Dockerfile
```
FROM ubuntu:18.04
RUN mkdir /usr/app
WORKDIR /usr/app
# Add and install Python modules
COPY requirements.txt ./
RUN apt-get update
RUN apt-get -y install python3
RUN apt-get -y install python3-pip
RUN pip3 install virtualenv
ENV VIRTUAL_ENV=/venv
RUN virtualenv venv -p python3
ENV PATH="VIRTUAL_ENV/bin:$PATH"
RUN pip3 install transformers[torch]
RUN pip3 install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
RUN pip3 install -r requirements.txt
# Bundle app source
COPY . ./
# Expose
EXPOSE 6000
# Run
CMD ["python", "app.py"]
```
This is the stack trace that comes back from running and trying to build using this dockerfile
```
[+] Building 571.8s (14/17)
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 835B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:18.04 0.8s
=> [internal] load build context 0.1s
=> => transferring context: 139.29kB 0.1s
=> [1/13] FROM docker.io/library/ubuntu:18.04@sha256:646942475da61b4ce9cc5b3fadb42642ea90e5d0de46111458e100ff2c7031e6 0.0s
=> CACHED [2/13] RUN mkdir /usr/app 0.0s
=> CACHED [3/13] WORKDIR /usr/app 0.0s
=> [4/13] COPY requirements.txt ./ 0.0s
=> [5/13] RUN apt-get update 30.8s
=> [6/13] RUN apt-get -y install python3 15.6s
=> [7/13] RUN apt-get -y install python3-pip 214.0s
=> [8/13] RUN pip3 install virtualenv 4.5s
=> [9/13] RUN virtualenv venv -p python3 0.8s
=> ERROR [10/13] RUN pip3 install transformers[torch] 305.1s
------
> [10/13] RUN pip3 install transformers[torch]:
#14 1.095 Collecting transformers[torch]
#14 1.410 Downloading https://files.pythonhosted.org/packages/2c/4e/4f1ede0fd7a36278844a277f8d53c21f88f37f3754abf76a5d6224f76d4a/
transformers-3.4.0-py3-none-any.whl (1.3MB)
#14 1.897 Collecting numpy (from transformers[torch])
#14 2.507 Downloading https://files.pythonhosted.org/packages/8f/40/ddb5109614aabad67e6fe426b3579a879b7b3cdd375eb27af467c4367ae0/
numpy-1.19.3-cp36-cp36m-manylinux1_x86_64.whl (13.4MB)
#14 5.947 Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers[torch])
#14 5.953 Collecting sentencepiece!=0.1.92 (from transformers[torch])
#14 6.174 Downloading https://files.pythonhosted.org/packages/72/e0/57edbab017a204e9f39448c1717292437a45b5f7cf3a9dbf4a9c026b03c5/
sentencepiece-0.1.94.tar.gz (507kB)
#14 6.575 Collecting sacremoses (from transformers[torch])
#14 6.714 Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/
sacremoses-0.0.43.tar.gz (883kB)
#14 7.158 Collecting requests (from transformers[torch])
#14 7.341 Downloading https://files.pythonhosted.org/packages/45/1e/0c169c6a5381e241ba7404532c16a21d86ab872c9bed8bdcd4c423954103/
requests-2.24.0-py2.py3-none-any.whl (61kB)
#14 7.404 Collecting packaging (from transformers[torch])
#14 7.551 Downloading https://files.pythonhosted.org/packages/46/19/c5ab91b1b05cfe63cccd5cfc971db9214c6dd6ced54e33c30d5af1d2bc43/
packaging-20.4-py2.py3-none-any.whl
#14 7.579 Collecting dataclasses; python_version < "3.7" (from transformers[torch])
#14 7.708 Downloading https://files.pythonhosted.org/packages/e1/d2/6f02df2616fd4016075f60157c7a0452b38d8f7938ae94343911e0fb0b09/
dataclasses-0.7-py3-none-any.whl
#14 7.725 Collecting tokenizers==0.9.2 (from transformers[torch])
#14 8.019 Downloading https://files.pythonhosted.org/packages/7c/a5/78be1a55b2ac8d6a956f0a211d372726e2b1dd2666bb537fea9b03abd62c/
tokenizers-0.9.2-cp36-cp36m-manylinux1_x86_64.whl (2.9MB)
#14 8.732 Collecting regex!=2019.12.17 (from transformers[torch])
#14 9.570 Downloading https://files.pythonhosted.org/packages/87/9f/aad666560082cb11331167cbb31cf0e8bd90af8ea4951436d1fcb2ddde44/
regex-2020.10.28-cp36-cp36m-manylinux1_x86_64.whl (666kB)
#14 9.756 Collecting protobuf (from transformers[torch])
#14 10.02 Downloading https://files.pythonhosted.org/packages/30/79/510974552cebff2ba04038544799450defe75e96ea5f1675dbf72cc8744f/
protobuf-3.13.0-cp36-cp36m-manylinux1_x86_64.whl (1.3MB)
#14 10.36 Collecting tqdm>=4.27 (from transformers[torch])
#14 10.57 Downloading https://files.pythonhosted.org/packages/93/3a/96b3dc293aa72443cf9627444c3c221a7ba34bb622e4d8bf1b5d4f2d9d08/
tqdm-4.51.0-py2.py3-none-any.whl (70kB)
#14 10.60 Collecting torch>=1.0; extra == "torch" (from transformers[torch])
#14 10.79 Downloading https://files.pythonhosted.org/packages/80/2a/58f8078744e0408619c63148f7a2e8e48cf007e4146b74d4bb67c56d161b/
torch-1.7.0-cp36-cp36m-manylinux1_x86_64.whl (776.7MB)
#14 285.6 Collecting click (from sacremoses->transformers[torch])
#14 292.4 Downloading https://files.pythonhosted.org/packages/d2/3d/fa76db83bf75c4f8d338c2fd15c8d33fdd7ad23a9b5e57eb6c5de26b430e/
click-7.1.2-py2.py3-none-any.whl (82kB)
#14 292.4 Collecting joblib (from sacremoses->transformers[torch])
#14 292.6 Downloading https://files.pythonhosted.org/packages/fc/c9/f58220ac44a1592f79a343caba12f6837f9e0c04c196176a3d66338e1ea8/
joblib-0.17.0-py3-none-any.whl (301kB)
#14 292.8 Requirement already satisfied: six in /usr/lib/python3/dist-packages (from sacremoses->transformers[torch])
#14 292.8 Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests->transformers[torch])
#14 293.0 Downloading https://files.pythonhosted.org/packages/56/aa/4ef5aa67a9a62505db124a5cb5262332d1d4153462eb8fd89c9fa41e5d92/
urllib3-1.25.11-py2.py3-none-any.whl (127kB)
#14 293.0 Collecting chardet<4,>=3.0.2 (from requests->transformers[torch])
#14 293.2 Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/
chardet-3.0.4-py2.py3-none-any.whl (133kB)
#14 293.2 Collecting certifi>=2017.4.17 (from requests->transformers[torch])
#14 293.4 Downloading https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/
certifi-2020.6.20-py2.py3-none-any.whl (156kB)
#14 293.4 Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests->transformers[torch])
#14 293.4 Collecting pyparsing>=2.0.2 (from packaging->transformers[torch])
#14 293.7 Downloading https://files.pythonhosted.org/packages/8a/bb/488841f56197b13700afd5658fc279a2025a39e22449b7cf29864669b15d/
pyparsing-2.4.7-py2.py3-none-any.whl (67kB)
#14 293.7 Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from protobuf->transformers[torch])
#14 293.7 Collecting future (from torch>=1.0; extra == "torch"->transformers[torch])
#14 293.9 Downloading https://files.pythonhosted.org/packages/45/0b/38b06fd9b92dc2b68d58b75f900e97884c45bedd2ff83203d933cf5851c9/
future-0.18.2.tar.gz (829kB)
#14 294.7 Collecting typing-extensions (from torch>=1.0; extra == "torch"->transformers[torch])
#14 294.9 Downloading https://files.pythonhosted.org/packages/60/7a/e881b5abb54db0e6e671ab088d079c57ce54e8a01a3ca443f561ccadb37e/
typing_extensions-3.7.4.3-py3-none-any.whl
#14 294.9 Building wheels for collected packages: sentencepiece, sacremoses, future
#14 294.9 Running setup.py bdist_wheel for sentencepiece: started
#14 295.6 Running setup.py bdist_wheel for sentencepiece: finished with status 'error'
#14 295.6 Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-o837wqyj/sent
encepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __
file__, 'exec'))" bdist_wheel -d /tmp/tmprwjlzrwrpip-wheel- --python-tag cp36:
#14 295.6 /usr/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
#14 295.6 warnings.warn(msg)
#14 295.6 running bdist_wheel
#14 295.6 running build
#14 295.6 running build_py
#14 295.6 creating build
#14 295.6 creating build/lib.linux-x86_64-3.6
#14 295.6 creating build/lib.linux-x86_64-3.6/sentencepiece
#14 295.6 copying src/sentencepiece/__init__.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 295.6 copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 295.6 copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 295.6 running build_ext
#14 295.6 /bin/sh: 1: pkg-config: not found
#14 295.6 ./build_bundled.sh: 8: ./build_bundled.sh: git: not found
#14 295.6 ./build_bundled.sh: 10: ./build_bundled.sh: git: not found
#14 295.6 ./build_bundled.sh: 12: cd: can't cd to sentencepiece
#14 295.6 ./build_bundled.sh: 15: ./build_bundled.sh: cmake: not found
#14 295.6 make: *** No targets specified and no makefile found. Stop.
#14 295.6 make: *** No rule to make target 'install'. Stop.
#14 295.6 env: 'pkg-config': No such file or directory
#14 295.6 Failed to find sentencepiece pkg-config
#14 295.6
#14 295.6 ----------------------------------------
#14 295.6 Failed building wheel for sentencepiece
#14 295.6 Running setup.py clean for sentencepiece
#14 295.8 Running setup.py bdist_wheel for sacremoses: started
#14 296.3 Running setup.py bdist_wheel for sacremoses: finished with status 'done'
#14 296.3 Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45
#14 296.4 Running setup.py bdist_wheel for future: started
#14 297.2 Running setup.py bdist_wheel for future: finished with status 'done'
#14 297.2 Stored in directory: /root/.cache/pip/wheels/8b/99/a0/81daf51dcd359a9377b110a8a886b3895921802d2fc1b2397e
#14 297.3 Successfully built sacremoses future
#14 297.3 Failed to build sentencepiece
#14 297.3 Installing collected packages: numpy, sentencepiece, click, joblib, regex, tqdm, sacremoses, urllib3, chardet, certifi, r
equests, pyparsing, packaging, dataclasses, tokenizers, protobuf, future, typing-extensions, torch, transformers
#14 301.3 Running setup.py install for sentencepiece: started
#14 301.7 Running setup.py install for sentencepiece: finished with status 'error'
#14 301.7 Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-o837wqyj/se
ntencepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code,
__file__, 'exec'))" install --record /tmp/pip-7ji4iyud-record/install-record.txt --single-version-externally-managed --compile:
#14 301.7 /usr/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
#14 301.7 warnings.warn(msg)
#14 301.7 running install
#14 301.7 running build
#14 301.7 running build_py
#14 301.7 creating build
#14 301.7 creating build/lib.linux-x86_64-3.6
#14 301.7 creating build/lib.linux-x86_64-3.6/sentencepiece
#14 301.7 copying src/sentencepiece/__init__.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 301.7 copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 301.7 copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 301.7 running build_ext
#14 301.7 /bin/sh: 1: pkg-config: not found
#14 301.7 mkdir: cannot create directory 'bundled': File exists
#14 301.7 ./build_bundled.sh: 8: ./build_bundled.sh: git: not found
#14 301.7 ./build_bundled.sh: 10: ./build_bundled.sh: git: not found
#14 301.7 ./build_bundled.sh: 12: cd: can't cd to sentencepiece
#14 301.7 mkdir: cannot create directory 'build': File exists
#14 301.7 ./build_bundled.sh: 15: ./build_bundled.sh: cmake: not found
#14 301.7 make: *** No targets specified and no makefile found. Stop.
#14 301.7 make: *** No rule to make target 'install'. Stop.
#14 301.7 env: 'pkg-config': No such file or directory
#14 301.7 Failed to find sentencepiece pkg-config
#14 301.7
#14 301.7 ----------------------------------------
#14 302.4 Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-o837wqyj/sentencepiece/setup.py';f=
getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" inst
all --record /tmp/pip-7ji4iyud-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in
/tmp/pip-build-o837wqyj/sentencepiece/
------
failed to solve with frontend dockerfile.v0: failed to build LLB: executor failed running [/bin/sh -c pip3 install transformers[tor
ch]]: runc did not terminate sucessfully
```
## Expected behavior
I expect transformers to be installed and importable from within the Docker container.
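For anyone blocked on the same build, a hedged workaround sketch (the maintainers' longer-term fix, per the comments above, is removing the hard sentencepiece dependency): the log shows pip falling back to a source build of sentencepiece because no prebuilt wheel matched, and that source build needs tools the base image lacks. The package names below follow the `not found` errors in the log:
```dockerfile
# Workaround sketch only — add before the pip3 install lines in the
# Dockerfile above so the sentencepiece source build can succeed.
RUN apt-get update && apt-get -y install pkg-config cmake git build-essential
RUN pip3 install transformers[torch]
```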
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8199/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8198/comments | https://api.github.com/repos/huggingface/transformers/issues/8198/events | https://github.com/huggingface/transformers/pull/8198 | 733,485,723 | MDExOlB1bGxSZXF1ZXN0NTEzMjU5NDA2 | 8,198 | Added 12 model cards for Indian Language Models | {
"login": "kushalj001",
"id": 32245327,
"node_id": "MDQ6VXNlcjMyMjQ1MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32245327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kushalj001",
"html_url": "https://github.com/kushalj001",
"followers_url": "https://api.github.com/users/kushalj001/followers",
"following_url": "https://api.github.com/users/kushalj001/following{/other_user}",
"gists_url": "https://api.github.com/users/kushalj001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kushalj001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kushalj001/subscriptions",
"organizations_url": "https://api.github.com/users/kushalj001/orgs",
"repos_url": "https://api.github.com/users/kushalj001/repos",
"events_url": "https://api.github.com/users/kushalj001/events{/privacy}",
"received_events_url": "https://api.github.com/users/kushalj001/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Wow, so cool! Thanks for your contribution."
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
This PR adds model cards for 12 language models that have recently been uploaded to the model hub over [here](https://huggingface.co/neuralspace-reverie). They cover 3 Indian languages, and for each language there are 4 model variants: BERT, DistilBERT, RoBERTa and XLM-R.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8198/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8198/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8198",
"html_url": "https://github.com/huggingface/transformers/pull/8198",
"diff_url": "https://github.com/huggingface/transformers/pull/8198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8198.patch",
"merged_at": 1604294264000
} |
https://api.github.com/repos/huggingface/transformers/issues/8197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8197/comments | https://api.github.com/repos/huggingface/transformers/issues/8197/events | https://github.com/huggingface/transformers/pull/8197 | 733,454,689 | MDExOlB1bGxSZXF1ZXN0NTEzMjMzMjc3 | 8,197 | Remove deprecated arguments from new run_clm | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | COLLABORATOR | null | # What does this PR do?
Fix a deprecation warning by replacing `tokenizer.max_len` with `tokenizer.model_max_length`.
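A minimal sketch of the rename (`gpt2` is used only as an example checkpoint, not necessarily what the script loads):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Deprecated attribute that triggered the warning in run_clm:
#     block_size = tokenizer.max_len
# Non-deprecated equivalent used after this PR:
block_size = tokenizer.model_max_length
print(block_size)  # 1024 for gpt2
```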
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8197/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8197",
"html_url": "https://github.com/huggingface/transformers/pull/8197",
"diff_url": "https://github.com/huggingface/transformers/pull/8197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8197.patch",
"merged_at": 1604086040000
} |
https://api.github.com/repos/huggingface/transformers/issues/8196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8196/comments | https://api.github.com/repos/huggingface/transformers/issues/8196/events | https://github.com/huggingface/transformers/issues/8196 | 733,417,733 | MDU6SXNzdWU3MzM0MTc3MzM= | 8,196 | pytest Errors | {
"login": "dbl001",
"id": 3105499,
"node_id": "MDQ6VXNlcjMxMDU0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3105499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dbl001",
"html_url": "https://github.com/dbl001",
"followers_url": "https://api.github.com/users/dbl001/followers",
"following_url": "https://api.github.com/users/dbl001/following{/other_user}",
"gists_url": "https://api.github.com/users/dbl001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dbl001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dbl001/subscriptions",
"organizations_url": "https://api.github.com/users/dbl001/orgs",
"repos_url": "https://api.github.com/users/dbl001/repos",
"events_url": "https://api.github.com/users/dbl001/events{/privacy}",
"received_events_url": "https://api.github.com/users/dbl001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I got the same error while loading BERT tokeniser and model from torch hub",
"Hello! Do you mind pasting the result of `pip list` done in your environment? Thank you!",
"It’s an Anaconda virtual environment.\nPython 3.6.11\n\n$ pip list\nPackage Version Location\n--------------------------------- ------------------- ----------------------------------------------------------\nabsl-py 0.11.0\naiohttp 3.7.2\nappdirs 1.4.4\nargon2-cffi 20.1.0\nastor 0.8.1\nastunparse 1.6.3\nasync-generator 1.10\nasync-timeout 3.0.1\nattrs 20.2.0\nAutomat 20.2.0\nawscli 1.18.169\nBabel 2.8.0\nbackcall 0.2.0\nbackports.functools-lru-cache 1.6.1\nbcrypt 3.2.0\nbeautifulsoup4 4.9.3\nbertopic 0.2.3\nblack 20.8b1\nbleach 3.2.1\nblinker 1.4\nbokeh 2.2.3\nboto 2.49.0\nboto3 1.16.9\nbotocore 1.19.9\nbrotlipy 0.7.0\nbz2file 0.98\ncachetools 4.1.1\ncertifi 2020.6.20\ncffi 1.14.3\nchainer 7.7.0\nchardet 3.0.4\nclick 7.1.2\ncloudpickle 1.2.2\ncolorama 0.4.3\nconstantly 15.1.0\ncryptography 3.2.1\ncssselect 1.1.0\ncycler 0.10.0\ncymem 1.31.2\nCython 0.29.21\ndataclasses 0.7\ndecorator 4.4.2\ndeepdist 0.1\ndefusedxml 0.6.0\ndill 0.3.2\ndiskcache 4.0.0\ndocutils 0.15.2\nentrypoints 0.3\nfeynman 2.0.0\nfilelock 3.0.12\nfindspark 1.3.0\nFlask 1.1.2\nflatbuffers 1.12\nfuncy 1.15\nfuture 0.18.2\ngast 0.3.3\ngensim 3.8.3\ngoogle-auth 1.23.0\ngoogle-auth-oauthlib 0.4.2\ngoogle-pasta 0.2.0\ngoogleapis-common-protos 1.52.0\ngrpcio 1.33.2\nh5py 2.10.0\nhdbscan 0.8.26\nhtml5lib 1.1\nhyperlink 20.0.1\nhypothesis 5.41.0\nidna 2.10\nidna-ssl 1.1.0\nimportlib-metadata 2.0.0\nincremental 17.5.0\niniconfig 1.1.1\nipykernel 5.3.4\nipython 7.12.0\nipython-genutils 0.2.0\nipywidgets 7.5.1\nitemadapter 0.1.1\nitemloaders 1.0.3\nitsdangerous 1.1.0\njedi 0.17.2\nJinja2 2.11.2\njmespath 0.10.0\njoblib 0.17.0\njson5 0.9.5\njsonschema 3.2.0\njupyter-client 6.1.7\njupyter-console 6.2.0\njupyter-contrib-core 0.3.3\njupyter-core 4.6.3\njupyter-nbextensions-configurator 0.4.1\njupyterlab 2.2.9\njupyterlab-pygments 0.1.2\njupyterlab-server 1.2.0\nKeras-Applications 1.0.8\nKeras-Preprocessing 1.1.2\nkiwisolver 1.3.0\nllvmlite 0.34.0\nlxml 4.6.1\nMarkdown 3.3.3\nMarkupSafe 1.1.1\nmatplotlib 3.3.2\nmistune 0.8.4\nmnist 0.2.2\nmore-itertools 8.6.0\nmpmath 1.1.0\nMulticoreTSNE 0.1\nmultidict 4.7.5\nmurmurhash 0.26.4\nmypy-extensions 0.4.3\nnbclient 0.5.1\nnbconvert 6.0.7\nnbformat 5.0.8\nnest-asyncio 1.4.1\nnltk 3.4.4\nnotebook 6.1.4\nnumba 0.51.2\nnumexpr 2.7.1\nnumpy 1.19.2\noauthlib 3.0.1\nolefile 0.46\nopt-einsum 3.3.0\npackaging 20.4\npandas 1.1.4\npandocfilters 1.4.2\nparameterized 0.7.4\nparsel 1.6.0\nparso 0.7.1\npathspec 0.8.0\npatsy 0.5.1\npetastorm 0.7.6 /home/ubuntu/petastorm\npexpect 4.8.0\npickleshare 0.7.5\nPillow 8.0.1\npip 20.2.4\nplac 1.0.0\npluggy 0.13.1\npreshed 0.46.4\nprometheus-client 0.8.0\npromise 2.3\nprompt-toolkit 3.0.8\nProtego 0.1.16\nprotobuf 3.13.0\npsutil 5.7.3\nptyprocess 0.6.0\npy 1.9.0\npy4j 0.10.9\npyarrow 2.0.0\npyasn1 0.4.8\npyasn1-modules 0.2.7\npycparser 2.20\nPyDispatcher 2.0.5\npydot 1.4.1\nPygments 2.7.2\nPyHamcrest 2.0.2\nPyJWT 1.7.1\npyLDAvis 2.1.2\npyOpenSSL 19.1.0\npyparsing 2.4.7\nPyQt5 5.12.3\nPyQt5-sip 4.19.18\nPyQtChart 5.12\nPyQtWebEngine 5.12.1\npyrsistent 0.17.3\nPySocks 1.7.1\npyspark 3.0.1\npytest 6.1.2\npython-dateutil 2.8.1\npytz 2020.1\nPyWavelets 1.1.1\nPyYAML 5.3.1\npyzmq 19.0.2\nqtconsole 4.7.7\nQtPy 1.9.0\nqueuelib 1.5.0\nregex 2020.10.28\nrequests 2.24.0\nrequests-oauthlib 1.3.0\nrsa 4.4.1\ns3transfer 0.3.3\nsacremoses 0.0.43\nscapy 2.4.4\nscikit-learn 0.23.2\nscipy 1.5.2\nScrapy 2.4.0\nseaborn 0.11.0\nsemver 2.8.1\nSend2Trash 1.5.0\nsense2vec 0.6.0\nsentence-transformers 0.3.6\nsentencepiece 0.1.91\nservice-identity 18.1.0\nsetuptools 49.6.0.post20201009\nsix 
1.15.0\nsklearn 0.0\nsmart-open 1.6.0\nsortedcontainers 2.2.2\nsoupsieve 2.0.1\nspacy 0.101.0\nsputnik 0.9.3\nstatsmodels 0.12.1\nsympy 1.6.2\ntensorboard 2.3.0\ntensorboard-plugin-wit 1.7.0\ntensorflow 2.2.0\ntensorflow-datasets 1.2.0\ntensorflow-estimator 2.2.0\ntensorflow-metadata 0.14.0\ntensorflow-probability 0.6.0\ntensorflowonspark 1.4.1\ntermcolor 1.1.0\nterminado 0.9.1\ntestpath 0.4.4\ntfp-nightly 0.5.0.dev20190522\nthinc 5.0.8\nthreadpoolctl 2.1.0\ntimeout-decorator 0.4.1\ntokenizers 0.9.2\ntoml 0.10.1\ntorch 1.7.0\ntorchaudio 0.7.0a0+ac17b64\ntorchvision 0.8.1\ntornado 6.1\ntqdm 4.51.0\ntraitlets 4.3.3\ntransformers 3.1.0 /home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages\nTwisted 20.3.0\ntwython 3.8.2\ntyped-ast 1.4.1\ntyping-extensions 3.7.4.3\numap-learn 0.4.6\nurllib3 1.25.11\nw3lib 1.22.0\nwcwidth 0.2.5\nwebencodings 0.5.1\nWerkzeug 1.0.1\nwheel 0.35.1\nwidgetsnbextension 3.5.1\nwordcloud 1.8.0\nwrapt 1.12.1\nyarl 1.6.2\nzipp 3.4.0\nzope.interface 5.1.2\n> On Nov 2, 2020, at 7:33 AM, Lysandre Debut <[email protected]> wrote:\n> \n> \n> Hello! Do you mind pasting the result of pip list done in your environment? Thank you!\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/8196#issuecomment-720544945>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AAXWFW2Z4DMWNXRLVPMHSD3SN3GLNANCNFSM4TFJGMGQ>.\n> \n\n",
"It seems you have a conflict between your `transformers` version, as `transformers-cli env` returns v3.4.0, while your `pip list` returns v3.1.0?",
"Mea culpa! I sent you the pip list from my Mac.\nHere’s the Ubuntu 20.04 LTS results\n\n$ conda list transformers\n# packages in environment at /home/ubuntu/anaconda2/envs/ai:\n#\n# Name Version Build Channel\nsentence-transformers 0.3.6 pypi_0 pypi\ntransformers 3.4.0 dev_0 <develop>\n(ai) ubuntu@ip-10-0-1-82:~/transformers$ \n\n\n$ pip list\nPackage Version Location\n--------------------------------- ------------------- ---------------------------------------------------------------------------------------\nabsl-py 0.11.0\naiohttp 3.7.2\nappdirs 1.4.4\nargon2-cffi 20.1.0\nastor 0.8.1\nastunparse 1.6.3\nasync-generator 1.10\nasync-timeout 3.0.1\nattrs 20.2.0\nAutomat 20.2.0\nawscli 1.18.169\nBabel 2.8.0\nbackcall 0.2.0\nbackports.functools-lru-cache 1.6.1\nbcrypt 3.2.0\nbeautifulsoup4 4.9.3\nbertopic 0.2.3\nblack 20.8b1\nbleach 3.2.1\nblinker 1.4\nbokeh 2.2.3\nboto 2.49.0\nboto3 1.16.9\nbotocore 1.19.9\nbrotlipy 0.7.0\nbz2file 0.98\ncachetools 4.1.1\ncertifi 2020.6.20\ncffi 1.14.3\nchainer 7.7.0\nchardet 3.0.4\nclick 7.1.2\ncloudpickle 1.2.2\ncolorama 0.4.3\nconstantly 15.1.0\ncryptography 3.2.1\ncssselect 1.1.0\ncycler 0.10.0\ncymem 1.31.2\nCython 0.29.21\ndataclasses 0.7\ndecorator 4.4.2\ndeepdist 0.1\ndefusedxml 0.6.0\ndill 0.3.2\ndiskcache 4.0.0\ndocutils 0.15.2\nentrypoints 0.3\nfeynman 2.0.0\nfilelock 3.0.12\nfindspark 1.3.0\nFlask 1.1.2\nflatbuffers 1.12\nfuncy 1.15\nfuture 0.18.2\ngast 0.3.3\ngensim 3.8.3\ngoogle-auth 1.23.0\ngoogle-auth-oauthlib 0.4.2\ngoogle-pasta 0.2.0\ngoogleapis-common-protos 1.52.0\ngrpcio 1.33.2\nh5py 2.10.0\nhdbscan 0.8.26\nhtml5lib 1.1\nhyperlink 20.0.1\nhypothesis 5.41.0\nidna 2.10\nidna-ssl 1.1.0\nimportlib-metadata 2.0.0\nincremental 17.5.0\niniconfig 1.1.1\nipykernel 5.3.4\nipython 7.12.0\nipython-genutils 0.2.0\nipywidgets 7.5.1\nitemadapter 0.1.1\nitemloaders 1.0.3\nitsdangerous 1.1.0\njedi 0.17.2\nJinja2 2.11.2\njmespath 0.10.0\njoblib 0.17.0\njson5 0.9.5\njsonschema 3.2.0\njupyter-client 6.1.7\njupyter-console 6.2.0\njupyter-contrib-core 0.3.3\njupyter-core 4.6.3\njupyter-nbextensions-configurator 0.4.1\njupyterlab 2.2.9\njupyterlab-pygments 0.1.2\njupyterlab-server 1.2.0\nKeras-Applications 1.0.8\nKeras-Preprocessing 1.1.2\nkiwisolver 1.3.0\nllvmlite 0.34.0\nlxml 4.6.1\nMarkdown 3.3.3\nMarkupSafe 1.1.1\nmatplotlib 3.3.2\nmistune 0.8.4\nmnist 0.2.2\nmore-itertools 8.6.0\nmpmath 1.1.0\nMulticoreTSNE 0.1\nmultidict 4.7.5\nmurmurhash 0.26.4\nmypy-extensions 0.4.3\nnbclient 0.5.1\nnbconvert 6.0.7\nnbformat 5.0.8\nnest-asyncio 1.4.1\nnltk 3.4.4\nnotebook 6.1.4\nnumba 0.51.2\nnumexpr 2.7.1\nnumpy 1.19.2\noauthlib 3.0.1\nolefile 0.46\nopt-einsum 3.3.0\npackaging 20.4\npandas 1.1.4\npandocfilters 1.4.2\nparameterized 0.7.4\nparsel 1.6.0\nparso 0.7.1\npathspec 0.8.0\npatsy 0.5.1\npetastorm 0.7.6 /home/ubuntu/petastorm\npexpect 4.8.0\npickleshare 0.7.5\nPillow 8.0.1\npip 20.2.4\nplac 1.0.0\npluggy 0.13.1\npreshed 0.46.4\nprometheus-client 0.8.0\npromise 2.3\nprompt-toolkit 3.0.8\nProtego 0.1.16\nprotobuf 3.13.0\npsutil 5.7.3\nptyprocess 0.6.0\npy 1.9.0\npy4j 0.10.9\npyarrow 2.0.0\npyasn1 0.4.8\npyasn1-modules 0.2.7\npycparser 2.20\nPyDispatcher 2.0.5\npydot 1.4.1\nPygments 2.7.2\nPyHamcrest 2.0.2\nPyJWT 1.7.1\npyLDAvis 2.1.2\npyOpenSSL 19.1.0\npyparsing 2.4.7\nPyQt5 5.12.3\nPyQt5-sip 4.19.18\nPyQtChart 5.12\nPyQtWebEngine 5.12.1\npyrsistent 0.17.3\nPySocks 1.7.1\npyspark 3.0.1\npytest 6.1.2\npython-dateutil 2.8.1\npytz 2020.1\nPyWavelets 1.1.1\nPyYAML 5.3.1\npyzmq 19.0.2\nqtconsole 4.7.7\nQtPy 1.9.0\nqueuelib 1.5.0\nregex 2020.10.28\nrequests 
2.24.0\nrequests-oauthlib 1.3.0\nrsa 4.4.1\ns3transfer 0.3.3\nsacremoses 0.0.43\nscapy 2.4.4\nscikit-learn 0.23.2\nscipy 1.5.2\nScrapy 2.4.0\nseaborn 0.11.0\nsemver 2.8.1\nSend2Trash 1.5.0\nsense2vec 0.6.0\nsentence-transformers 0.3.6\nsentencepiece 0.1.91\nservice-identity 18.1.0\nsetuptools 49.6.0.post20201009\nsix 1.15.0\nsklearn 0.0\nsmart-open 1.6.0\nsortedcontainers 2.2.2\nsoupsieve 2.0.1\nspacy 0.101.0\nsputnik 0.9.3\nstatsmodels 0.12.1\nsympy 1.6.2\ntensorboard 2.3.0\ntensorboard-plugin-wit 1.7.0\ntensorflow 2.2.0\ntensorflow-datasets 1.2.0\ntensorflow-estimator 2.2.0\ntensorflow-metadata 0.14.0\ntensorflow-probability 0.6.0\ntensorflowonspark 1.4.1\ntermcolor 1.1.0\nterminado 0.9.1\ntestpath 0.4.4\ntfp-nightly 0.5.0.dev20190522\nthinc 5.0.8\nthreadpoolctl 2.1.0\ntimeout-decorator 0.4.1\ntokenizers 0.9.2\ntoml 0.10.1\ntorch 1.7.0\ntorchaudio 0.7.0a0+ac17b64\ntorchvision 0.8.1\ntornado 6.1\ntqdm 4.51.0\ntraitlets 4.3.3\ntransformers 3.4.0 /home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg\nTwisted 20.3.0\ntwython 3.8.2\ntyped-ast 1.4.1\ntyping-extensions 3.7.4.3\numap-learn 0.4.6\nurllib3 1.25.11\nw3lib 1.22.0\nwcwidth 0.2.5\nwebencodings 0.5.1\nWerkzeug 1.0.1\nwheel 0.35.1\nwidgetsnbextension 3.5.1\nwordcloud 1.8.0\nwrapt 1.12.1\nyarl 1.6.2\nzipp 3.4.0\nzope.interface 5.1.2\n> On Nov 2, 2020, at 9:15 AM, Lysandre Debut <[email protected]> wrote:\n> \n> \n> It seems you have a conflict between your transformers version, as transformers-cli env returns v3.4.0, while your pip list returns v3.1.0?\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/8196#issuecomment-720607259>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AAXWFWYIWSYOAK3B7CD7PRTSN3SM7ANCNFSM4TFJGMGQ>.\n> \n\n",
"After looking a bit into it, it seems there was the initialization of the XLMProphetNetTokenizer missing when the `sentencepiece` dependency was not detected. #8245 should solve it, thank you for raising an issue!",
"Great! Thank you.\n\nbtw - There are many missing packages when I try to run ‘pytest' for tests and examples.\nE.g. - datasets, timeout-decorator, faiss, parameterized, etc.\nIt would be nice if there was a requirements.txt file. (Just a suggestion).\n;-)\n\n> On Nov 2, 2020, at 10:58 AM, Lysandre Debut <[email protected]> wrote:\n> \n> \n> After looking a bit into it, it seems there was the initialization of the XLMProphetNetTokenizer missing when the sentencepiece dependency was not detected. #8245 <https://github.com/huggingface/transformers/pull/8245> should solve it, thank you for raising an issue!\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/8196#issuecomment-720662390>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AAXWFWYOLUGKB3SN3IT3CH3SN36LVANCNFSM4TFJGMGQ>.\n> \n\n",
"For the tests, you should be able to get it working with `pip install transformers[testing]` or `pip install . [testing]` if you have cloned the repository.\r\n\r\nFor the examples, there is a `requirements.txt` file in the `examples/` directory:\r\n\r\n```shell-script\r\ncd examples\r\npip install -r requirements.txt\r\n```",
"Just merged #8245, installing from source should remove the error mentioned previously. Thanks again for letting us know!"
] | 1,604 | 1,604 | 1,604 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
(ai) ubuntu@ip-10-0-1-82:~/transformers$ transformers-cli env
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "/home/ubuntu/anaconda2/envs/ai/bin/transformers-cli", line 33, in <module>
sys.exit(load_entry_point('transformers==3.4.0', 'console_scripts', 'transformers-cli')())
File "/home/ubuntu/anaconda2/envs/ai/bin/transformers-cli", line 25, in importlib_load_entry_point
return next(matches).load()
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/importlib_metadata/__init__.py", line 105, in load
module = import_module(match.group('module'))
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/__init__.py", line 135, in <module>
from .pipelines import (
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/pipelines.py", line 38, in <module>
from .tokenization_auto import AutoTokenizer
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/tokenization_auto.py", line 210, in <module>
(XLMProphetNetConfig, (XLMProphetNetTokenizer, None)),
NameError: name 'XLMProphetNetTokenizer' is not defined
- `transformers` version: 3.4.0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.6.11
- PyTorch version (GPU?): 1.7.0 (no GPU)
- Tensorflow version (GPU?): 2.2.0 (no GPU)
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
```
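The root cause, per the comments above, is the optional `sentencepiece` dependency not being installed. A minimal local check, as a sketch (only the PyPI package name comes from the thread; everything else is illustrative):

```python
import importlib.util

# The XLMProphetNetTokenizer NameError in the traceback above is triggered
# when this optional dependency is absent at import time.
if importlib.util.find_spec("sentencepiece") is None:
    print("sentencepiece is missing; install it with: pip install sentencepiece")
else:
    import sentencepiece
    print("sentencepiece version:", sentencepiece.__version__)
```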
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
tokenizers: @mfuntowicz
examples/distillation: @VictorSanh
-->
## Information
## To reproduce
Steps to reproduce the behavior:
1. RUN_SLOW=1 pytest examples
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8196/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8195/comments | https://api.github.com/repos/huggingface/transformers/issues/8195/events | https://github.com/huggingface/transformers/pull/8195 | 733,411,226 | MDExOlB1bGxSZXF1ZXN0NTEzMTk3MTU3 | 8,195 | Attempt at a temporary fix on `model_max_length` for roberta and Camembert variants | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"No, `model_max_length` is not defined in the `tokenizer.json` for these models so truncation is off, or fails at inference in the model.",
"Well we also load the `transformers` specific configuration with `from_pretrained`, not only the `tokenizers` file (because We have additional attributes in `transformers`). It should be in this configuration file. I’ll take a look.",
"Okay, I'll wait @thomwolf for your advice on this then.",
"Stale",
"Too old"
] | 1,604 | 1,651 | 1,631 | CONTRIBUTOR | null | - The issue is that this information is not contained in the
`tokenizer` config file.
- It used to be hardcoded already (with the value 512, too).
- It is unclear right now how to "properly" fix it; a stop-gap, user-side workaround is sketched below.
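In the meantime, a minimal workaround sketch (the checkpoint name is only an example of an affected variant; 512 mirrors the previously hardcoded value):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
# These checkpoints ship no model_max_length in their tokenizer config,
# so truncation is effectively unbounded; set the limit by hand.
tokenizer.model_max_length = 512
encoded = tokenizer("some very long text ...", truncation=True)
```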
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #8117 (tentatively)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8195/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8195",
"html_url": "https://github.com/huggingface/transformers/pull/8195",
"diff_url": "https://github.com/huggingface/transformers/pull/8195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8195.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8194/comments | https://api.github.com/repos/huggingface/transformers/issues/8194/events | https://github.com/huggingface/transformers/pull/8194 | 733,408,340 | MDExOlB1bGxSZXF1ZXN0NTEzMTk0ODM4 | 8,194 | [Seq2SeqTrainer] Move import to init to make file self-contained | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm okay with this, but users will still need to copy paste or rewrite `Seq2SeqTrainingArguments` and pass those as `args` instead of default `TrainingArguments`, since we assume `args` is `Seq2SeqTrainingArguments` in multiple methods \r\n\r\n"
] | 1,604 | 1,604 | 1,604 | MEMBER | null | # What does this PR do?
Seq2SeqTrainer can be used as an independent file if no `label_smoothing` is done. This PR moves the import into the init so that the file can simply be downloaded and used as-is, with no extra dependencies, for standard seq2seq training.
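The pattern, roughly, is a deferred import — a sketch only; the helper name and module path are assumptions about the example's utilities, not guaranteed to match the diff:

```python
from transformers import Trainer

class Seq2SeqTrainer(Trainer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if self.args.label_smoothing != 0:
            # Deferred import: the file stays copy-paste usable with no
            # extra dependencies as long as label smoothing is off.
            from utils import label_smoothed_nll_loss  # hypothetical module path

            self.loss_fn = label_smoothed_nll_loss
```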
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Would be awesome if @patil-suraj and @stas00 could review as well :-)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8194/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8194",
"html_url": "https://github.com/huggingface/transformers/pull/8194",
"diff_url": "https://github.com/huggingface/transformers/pull/8194.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8194.patch",
"merged_at": 1604269915000
} |
https://api.github.com/repos/huggingface/transformers/issues/8193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8193/comments | https://api.github.com/repos/huggingface/transformers/issues/8193/events | https://github.com/huggingface/transformers/pull/8193 | 733,381,683 | MDExOlB1bGxSZXF1ZXN0NTEzMTcxNTE5 | 8,193 | Fix two bugs with --logging_first_step | {
"login": "abisee",
"id": 14880223,
"node_id": "MDQ6VXNlcjE0ODgwMjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/14880223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abisee",
"html_url": "https://github.com/abisee",
"followers_url": "https://api.github.com/users/abisee/followers",
"following_url": "https://api.github.com/users/abisee/following{/other_user}",
"gists_url": "https://api.github.com/users/abisee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abisee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abisee/subscriptions",
"organizations_url": "https://api.github.com/users/abisee/orgs",
"repos_url": "https://api.github.com/users/abisee/repos",
"events_url": "https://api.github.com/users/abisee/events{/privacy}",
"received_events_url": "https://api.github.com/users/abisee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for your PR! For the first point, I think we should fix the docs, not add an evaluation. It doesn't make any sense to evaluate at step 1 (one could call `trainer.evaluate()` before training if they really wanted to).\r\nFor the second point, good catch, this is certainly useful!",
"OK, both the description and the behavior is now logging only (no eval).",
"Perfect, thanks!"
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes two bugs relating to the `--logging_first_step` flag:
1. Though the description for `--logging_first_step` says `"Log and eval the first global_step"`, the flag doesn't actually eval (it only logs). This PR makes sure that eval happens on the first step.
2. When `--logging_first_step` is on, the logged training loss for the first step is miscalculated in `Trainer._maybe_log_save_evaluate`:
```python
logs["loss"] = (tr_loss_scalar - self._logging_loss_scalar) / self.args.logging_steps
```
This divides the loss by `logging_steps` (which is typically large, e.g. 500), when it should be divided by 1. This PR makes sure that the loss is divided by the correct number of steps.
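A sketch of the corrected computation (variable names follow the snippet above; the attribute names are assumptions about the Trainer internals of this era, and the merged fix may differ):

```python
# Divide by the steps elapsed since the last log (which is 1 on the very
# first step) instead of always dividing by args.logging_steps.
steps_since_last_log = self.state.global_step - self._globalstep_last_logged
logs["loss"] = (tr_loss_scalar - self._logging_loss_scalar) / steps_since_last_log
```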
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8193/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8193/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8193",
"html_url": "https://github.com/huggingface/transformers/pull/8193",
"diff_url": "https://github.com/huggingface/transformers/pull/8193.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8193.patch",
"merged_at": 1604090739000
} |
https://api.github.com/repos/huggingface/transformers/issues/8192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8192/comments | https://api.github.com/repos/huggingface/transformers/issues/8192/events | https://github.com/huggingface/transformers/pull/8192 | 733,326,566 | MDExOlB1bGxSZXF1ZXN0NTEzMTI0MTUx | 8,192 | Add model cards. | {
"login": "mazicwong",
"id": 17029801,
"node_id": "MDQ6VXNlcjE3MDI5ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/17029801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mazicwong",
"html_url": "https://github.com/mazicwong",
"followers_url": "https://api.github.com/users/mazicwong/followers",
"following_url": "https://api.github.com/users/mazicwong/following{/other_user}",
"gists_url": "https://api.github.com/users/mazicwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mazicwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mazicwong/subscriptions",
"organizations_url": "https://api.github.com/users/mazicwong/orgs",
"repos_url": "https://api.github.com/users/mazicwong/repos",
"events_url": "https://api.github.com/users/mazicwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/mazicwong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | Complete the author list in model cards for DynaBERT. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8192/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8192",
"html_url": "https://github.com/huggingface/transformers/pull/8192",
"diff_url": "https://github.com/huggingface/transformers/pull/8192.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8192.patch",
"merged_at": 1604294379000
} |
https://api.github.com/repos/huggingface/transformers/issues/8191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8191/comments | https://api.github.com/repos/huggingface/transformers/issues/8191/events | https://github.com/huggingface/transformers/pull/8191 | 733,319,854 | MDExOlB1bGxSZXF1ZXN0NTEzMTE4Nzc2 | 8,191 | Patch 3 | {
"login": "mazicwong",
"id": 17029801,
"node_id": "MDQ6VXNlcjE3MDI5ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/17029801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mazicwong",
"html_url": "https://github.com/mazicwong",
"followers_url": "https://api.github.com/users/mazicwong/followers",
"following_url": "https://api.github.com/users/mazicwong/following{/other_user}",
"gists_url": "https://api.github.com/users/mazicwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mazicwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mazicwong/subscriptions",
"organizations_url": "https://api.github.com/users/mazicwong/orgs",
"repos_url": "https://api.github.com/users/mazicwong/repos",
"events_url": "https://api.github.com/users/mazicwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/mazicwong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | Complete the author list in model cards. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8191/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8191",
"html_url": "https://github.com/huggingface/transformers/pull/8191",
"diff_url": "https://github.com/huggingface/transformers/pull/8191.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8191.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8190/comments | https://api.github.com/repos/huggingface/transformers/issues/8190/events | https://github.com/huggingface/transformers/issues/8190 | 733,302,430 | MDU6SXNzdWU3MzMzMDI0MzA= | 8,190 | TextDataset support for tensorflow? | {
"login": "mymusise",
"id": 6883957,
"node_id": "MDQ6VXNlcjY4ODM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mymusise",
"html_url": "https://github.com/mymusise",
"followers_url": "https://api.github.com/users/mymusise/followers",
"following_url": "https://api.github.com/users/mymusise/following{/other_user}",
"gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mymusise/subscriptions",
"organizations_url": "https://api.github.com/users/mymusise/orgs",
"repos_url": "https://api.github.com/users/mymusise/repos",
"events_url": "https://api.github.com/users/mymusise/events{/privacy}",
"received_events_url": "https://api.github.com/users/mymusise/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"There is no plan for that as both those APIs will be deprecated soon. Users should directly use the [Datasets](https://github.com/huggingface/datasets) library which works for both PyTorch and TF. There are examples of how to replicate `TextDataset` and `LineByLineTextDataset` using that library in the new [`run_clm`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) and [`run_mlm`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) scripts. To convert the datasets to the TF format, just use their `set_format` method (see the [doc here](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.set_format)). ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,609 | 1,609 | CONTRIBUTOR | null | Hey, guys. I find [`TextDataset`](https://github.com/huggingface/transformers/blob/9a21b50614991889f11dbe0743af25923765f9e9/src/transformers/data/datasets/language_modeling.py#L20) and `LineByLineTextDataset` are a great design; they help people build input data much faster. But it's a pity that they only support **pytorch** now. Is there any possibility of supporting **tensorflow**? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8190/timeline | completed | null | null |
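A minimal sketch of the replacement suggested in the first comment above — tokenizing a text file with the Datasets library and switching it to the TF format via `set_format` (checkpoint, file name, and column choices are illustrative only):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])
# After set_format, indexing the dataset returns TensorFlow tensors
# instead of plain Python lists.
dataset.set_format(type="tensorflow", columns=["input_ids", "attention_mask"])
```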
https://api.github.com/repos/huggingface/transformers/issues/8189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8189/comments | https://api.github.com/repos/huggingface/transformers/issues/8189/events | https://github.com/huggingface/transformers/pull/8189 | 733,290,389 | MDExOlB1bGxSZXF1ZXN0NTEzMDk1Nzcy | 8,189 | Doc fixes and filter warning in wandb | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | COLLABORATOR | null | # What does this PR do?
There is no `XxxForPreTrainingModel`, just `XxxForPretraining`, so this PR fixes the docstrings in multiple files.
Also, as discussed on the Comet side, there should be no warning when the ENV says wandb should not be used.
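On the wandb side, the change boils down to checking the opt-out variable before emitting any warning — roughly like this (a sketch, not the exact diff; `WANDB_DISABLED` is the variable the Trainer integration respects):

```python
import importlib.util
import os

def is_wandb_available():
    # An explicit opt-out should be honored silently -- no warning.
    if os.environ.get("WANDB_DISABLED"):
        return False
    return importlib.util.find_spec("wandb") is not None
```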
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8189/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8189",
"html_url": "https://github.com/huggingface/transformers/pull/8189",
"diff_url": "https://github.com/huggingface/transformers/pull/8189.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8189.patch",
"merged_at": 1604075854000
} |
https://api.github.com/repos/huggingface/transformers/issues/8188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8188/comments | https://api.github.com/repos/huggingface/transformers/issues/8188/events | https://github.com/huggingface/transformers/pull/8188 | 733,275,906 | MDExOlB1bGxSZXF1ZXN0NTEzMDg0NDMx | 8,188 | Finalize lm examples | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | COLLABORATOR | null | # What does this PR do?
This PR adds a `run_mlm_wwm` script as an example of MLM with whole word masking, which was the last kind of example supported by `run_language_modeling`.
As a result, it moves `run_language_modeling` to `contrib/legacy/` and updates the README to document how to use all the new example scripts.
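The whole-word-masking logic itself is exposed through a data collator; a minimal construction sketch (the class name is the library's, but treat the wiring as illustrative):

```python
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Masks every sub-word piece of a selected word together, rather than
# sampling individual subtokens independently at 15%.
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)
```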
I also slightly reworked the table of tasks in the main README to include whether or not each example leverages the Datasets library. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8188/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8188",
"html_url": "https://github.com/huggingface/transformers/pull/8188",
"diff_url": "https://github.com/huggingface/transformers/pull/8188.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8188.patch",
"merged_at": 1604082019000
} |
https://api.github.com/repos/huggingface/transformers/issues/8187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8187/comments | https://api.github.com/repos/huggingface/transformers/issues/8187/events | https://github.com/huggingface/transformers/issues/8187 | 733,267,983 | MDU6SXNzdWU3MzMyNjc5ODM= | 8,187 | Configuration initialized from checkpoint does not keep the checkpoint identifier in its attributes | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
},
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Good point.\r\n\r\nMaybe what we could do is:\r\n- initializing the configuration with `from_pretrained` initializes the `name_or_path` attribute of the config as you mention, and\r\n- using the configuration in a *model* `from_pretrained` method override the `name_or_path` attribute with the one of the model so that it's in priority linked to the weights path.\r\n\r\nAnother option would be to have two attributes in the configuration:\r\n- `configuration_name_or_path`\r\n- `weights_name_or_path`\r\n\r\nrespectively populated by the config `from_pretrained` and the model `from_pretrained`. Maybe with a property linking to one in priority.\r\n\r\nbut I'm wondering if it's worth so many attributes... 🤔",
"I would go for the first version you proposed: having `name_or_path` for the configuration initialized if used alongside `from_pretrained`, which gets overridden by the model `from_pretrained`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | MEMBER | null | Since version v3.4.0, initializing a model using the `from_pretrained` method adds a `name_or_path` attribute to the configuration, referencing the checkpoint used for initialization:
```py
from transformers import BertModel
model = BertModel.from_pretrained(model_name)
print(model.config.name_or_path)
# model_name
```
However, initializing the configuration on its own with the `from_pretrained` method does not yield the same attribute:
```py
from transformers import BertConfig
config = BertConfig.from_pretrained(model_name)
# config has no `name_or_path` attribute
```
This means that the configuration object initialized is not the same in both cases, whereas it probably should be. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8187/timeline | completed | null | null |
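A sketch of the first option proposed in the comments above — the config's `from_pretrained` records the checkpoint, and a model's `from_pretrained` would later override it with the weights path. The loading helpers mirror `PretrainedConfig`'s real methods, but this is not the merged code:

```python
class PretrainedConfig:
    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
        config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
        config = cls.from_dict(config_dict, **kwargs)
        # New: remember which checkpoint this config came from; a model's
        # from_pretrained would later overwrite it with the weights path.
        config.name_or_path = str(pretrained_model_name_or_path)
        return config
```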
https://api.github.com/repos/huggingface/transformers/issues/8186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8186/comments | https://api.github.com/repos/huggingface/transformers/issues/8186/events | https://github.com/huggingface/transformers/issues/8186 | 733,264,586 | MDU6SXNzdWU3MzMyNjQ1ODY= | 8,186 | T5 (probably BART) issues with the `tf.saved_model.save` API and the `output_xxx` configuration attributes. | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | MEMBER | null | The TensorFlow implementation of the T5 (and very probably the BART) model has an issue with using the `tf.saved_model.save` API alongside the `output_attentions=True` and `output_hidden_states=True` configuration attributes.
The tests are currently skipped due to this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8186/timeline | completed | null | null |
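For reference, the failing export pattern for the issue above is essentially the following (model size and inputs are illustrative; the `tf.saved_model.save` call is what breaks when these flags are set):

```python
import tensorflow as tf
from transformers import TFT5Model

model = TFT5Model.from_pretrained(
    "t5-small", output_attentions=True, output_hidden_states=True
)
ids = tf.constant([[1, 2, 3]])
model(input_ids=ids, decoder_input_ids=ids)  # run once so the model is built
tf.saved_model.save(model, "saved_t5")  # this export fails with the flags above
```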
https://api.github.com/repos/huggingface/transformers/issues/8185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8185/comments | https://api.github.com/repos/huggingface/transformers/issues/8185/events | https://github.com/huggingface/transformers/issues/8185 | 733,262,213 | MDU6SXNzdWU3MzMyNjIyMTM= | 8,185 | TensorFlow Longformer model as a saved model with attention outputs | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Don't manage to get this test passing even with the new design of #7562 -> the problem to me is that the shape of `attentions` in Longformer depends on the input tensor => so not sure we'll find a good solution here",
"If it can't pass the test defined in the common tests, then the best would be to override the test in the `LongformerModelTester` and do a test to ensure that the correct behavior still works, even if not adhering to the common tests.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | MEMBER | null | The TensorFlow implementation of the Longformer model has an issue with using the `tf.saved_model.save` API alongside the `output_attentions=True` configuration attribute.
The test is currently skipped due to this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8185/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8184/comments | https://api.github.com/repos/huggingface/transformers/issues/8184/events | https://github.com/huggingface/transformers/issues/8184 | 733,208,390 | MDU6SXNzdWU3MzMyMDgzOTA= | 8,184 | trainer.evaluate returns 'epoch' from training | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can easily ignore that value though.\r\n\r\nThe problem is that you won't have it at each eval during the training loop if we don't include it. There could be something smarter there, but it would take time for something that is just purely cosmetic.",
"I added a small PR for improved documentation about this: #8273",
"Closing this since PR was merged."
] | 1,604 | 1,605 | 1,605 | CONTRIBUTOR | null | I am training a BERT model: `trainer.train()`
Then I call `evaluate_result = trainer.evaluate(labeled_dataset_test)`
The value of `evaluate_result` looks like this:
```python
{'eval_loss': 0.5908029079437256,
'eval_acc': 0.8282828282828283,
'eval_bac': 0.8243021346469622,
'eval_mcc': 0.7422526698197041,
'eval_f1_macro': 0.826792009400705,
'epoch': 3.0,
'total_flos': 1373653507542624}
```
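As a stopgap I strip the training bookkeeping keys myself. A minimal sketch, assuming `evaluate_result` is the dict above:
```python
# Keep only the actual evaluation metrics; 'epoch' and 'total_flos' are training bookkeeping.
metrics = {k: v for k, v in evaluate_result.items() if k not in ("epoch", "total_flos")}
```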
As the sketch above shows, the extra keys can be filtered out manually, but IMO the dict should not contain `'epoch': 3.0` in the first place. That is the number of epochs from training; it has nothing to do with evaluation... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8184/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8183/comments | https://api.github.com/repos/huggingface/transformers/issues/8183/events | https://github.com/huggingface/transformers/issues/8183 | 733,147,415 | MDU6SXNzdWU3MzMxNDc0MTU= | 8,183 | Summarization outputs on T5-small gets truncated | {
"login": "harung1993",
"id": 70214482,
"node_id": "MDQ6VXNlcjcwMjE0NDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/70214482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harung1993",
"html_url": "https://github.com/harung1993",
"followers_url": "https://api.github.com/users/harung1993/followers",
"following_url": "https://api.github.com/users/harung1993/following{/other_user}",
"gists_url": "https://api.github.com/users/harung1993/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harung1993/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harung1993/subscriptions",
"organizations_url": "https://api.github.com/users/harung1993/orgs",
"repos_url": "https://api.github.com/users/harung1993/repos",
"events_url": "https://api.github.com/users/harung1993/events{/privacy}",
"received_events_url": "https://api.github.com/users/harung1993/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @harung1993, \r\n\r\nsorry I'm having trouble understanding your question here. Also this seems like a question that should rather be posted in https://discuss.huggingface.co/ . We are trying to use github issues only for bug reports. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | # ❓ Questions & Help
## Details
I have been fine-tuning t5-small on my own dataset, but whenever I set a max_length the output simply gets truncated. For example, my input statement is:
**When I first entered high school I was very nervous as it was a new school for me and it was a big adjustment. I was overwhelmed with work and mentally wasn't staying optimistic as I found it hard to manage my time and make friends. I felt like I wasn't good enough, and this caused me to treat myself like I wasn't worthy of being at such a place. In terms of behavior to others, I would say it made me more shy while still adapting to the new environment.**
and my output is as follows:
**when I first entered high school I was very nervous as it was a new school for me and it was a**
My `generate` call is as follows: `(input, min_length=0, max_length=25, length_penalty=2.0, num_beams=4, early_stopping=True)`
Is it possible to keep the output from being truncated at the end, and also to have it generate a reasonable summary? A sketch with a larger budget follows below.
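For reference, the same call with a larger generation budget. This is a hedged sketch: `model` and `input_ids` are assumed from my fine-tuning code, and the numbers are illustrative:
```python
summary_ids = model.generate(
    input_ids,
    min_length=10,
    max_length=60,  # was 25; any summary longer than this is necessarily cut off
    length_penalty=2.0,
    num_beams=4,
    early_stopping=True,
)
```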
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8183/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8182/comments | https://api.github.com/repos/huggingface/transformers/issues/8182/events | https://github.com/huggingface/transformers/issues/8182 | 733,136,812 | MDU6SXNzdWU3MzMxMzY4MTI= | 8,182 | cannot load pytorch_model.bin / pytorch version ? | {
"login": "woong97",
"id": 60849888,
"node_id": "MDQ6VXNlcjYwODQ5ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/60849888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/woong97",
"html_url": "https://github.com/woong97",
"followers_url": "https://api.github.com/users/woong97/followers",
"following_url": "https://api.github.com/users/woong97/following{/other_user}",
"gists_url": "https://api.github.com/users/woong97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/woong97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/woong97/subscriptions",
"organizations_url": "https://api.github.com/users/woong97/orgs",
"repos_url": "https://api.github.com/users/woong97/repos",
"events_url": "https://api.github.com/users/woong97/events{/privacy}",
"received_events_url": "https://api.github.com/users/woong97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Could you provide the code you're using, as well as all the environment information?",
"@LysandreJik should we update the issue template for this last option `Questions & Help`?\r\n\r\nI feel like our first question to everybody is always `Could you provide the code you're using, as well as all the environment information`",
"You're right, we should!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | # ❓ Questions & Help
## Details
torch version: 1.4.0
I run run_language_modeling.py and save the model. However, when I load the saved model, I get: "OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a Pytorch model from a TF 2.0 checkpoint, please set from_tf=True".
If I install torch==1.6.0, the model loads successfully. However, I have to stay on torch 1.4.0 and torchvision 0.5.0. How can I load pytorch_model.bin with torch 1.4.0?
Additionally, I tried to train run_language_modeling.py under torch 1.4.0, but it cannot import "torch.optim.lr_scheduler", so the training code cannot run either.
So my questions are:
[1] How can I load pytorch_model.bin with torch 1.4.0, or
[2] how can I train run_language_modeling.py with torch 1.4.0?
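For context: torch 1.6 switched to a zip-based checkpoint format that torch 1.4 cannot read, which is the usual cause of this OSError. A hedged workaround sketch: run this once in an environment with torch >= 1.6 to re-save the checkpoint in the legacy format (the file names are assumptions):
```python
import torch

# Load the new-format checkpoint with torch >= 1.6...
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
# ...and re-save it in the legacy (non-zip) format that torch 1.4 can read.
torch.save(state_dict, "pytorch_model_legacy.bin", _use_new_zipfile_serialization=False)
```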
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8182/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8181/comments | https://api.github.com/repos/huggingface/transformers/issues/8181/events | https://github.com/huggingface/transformers/issues/8181 | 733,119,624 | MDU6SXNzdWU3MzMxMTk2MjQ= | 8,181 | Documentation on how to get results out of trainer is missing. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Indeed, do you want to open a PR to fix this? Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | CONTRIBUTOR | null | Hi,
some time ago it was possible to get the results out of the trainer via `trainer.log_history`. This has now changed to `trainer.state.log_history`, but none of this is documented.
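To make the intended usage concrete, here is a minimal sketch; it assumes `trainer` is a `Trainer` that has already finished training:
```python
for entry in trainer.state.log_history:
    # Each entry is a plain dict of logged values, e.g. {'loss': 0.61, 'epoch': 1.0, 'step': 500}.
    print(entry)
```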
I suggest adding documentation on how to get results out of the trainer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8181/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8180/comments | https://api.github.com/repos/huggingface/transformers/issues/8180/events | https://github.com/huggingface/transformers/pull/8180 | 733,095,406 | MDExOlB1bGxSZXF1ZXN0NTEyOTMwNDg2 | 8,180 | Fix the behaviour of DefaultArgumentHandler (removing it). | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sure before the change\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(task='fill-mask', model='bert-base-uncased')\r\npipe(\"I am a real [MASK]\", targets=[\"superhero\", \"legend\"])\r\n# [{'sequence': '[CLS] i am a real superhero [SEP]',\r\n# 'score': 1.21390044682812e-07,\r\n# 'token': 16251,\r\n# 'token_str': 'superhero'},\r\n# {'sequence': '[CLS] i am a real legend [SEP]',\r\n# 'score': 4.292454747201191e-08,\r\n# 'token': 5722,\r\n# 'token_str': 'legend'}]\r\n\r\npipe(\"I am a real [MASK]\", otherarg=True)\r\nValueError Traceback (most recent call last)\r\n<ipython-input-13-4784fa412984> in <module>\r\n----> 1 pipe(\"I am a real [MASK]\", otherarg=True)\r\n\r\n~/.pyenv/versions/3.8.5/lib/python3.8/site-packages/transformers/pipelines.py in __call__(self, targets, *args, **kwargs)\r\n 1201 - **token** (:obj:`str`) -- The predicted token (to replace the masked one).\r\n 1202 \"\"\"\r\n-> 1203 inputs = self._parse_and_tokenize(*args, **kwargs)\r\n 1204 outputs = self._forward(inputs, return_tensors=True)\r\n 1205 \r\n\r\n~/.pyenv/versions/3.8.5/lib/python3.8/site-packages/transformers/pipelines.py in _parse_and_tokenize(self, padding, add_special_tokens, *args, **kwargs)\r\n 625 \"\"\"\r\n 626 # Parse arguments\r\n--> 627 inputs = self._args_parser(*args, **kwargs)\r\n 628 inputs = self.tokenizer(\r\n 629 inputs,\r\n\r\n~/.pyenv/versions/3.8.5/lib/python3.8/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)\r\n 179 def __call__(self, *args, **kwargs):\r\n 180 if len(kwargs) > 0 and len(args) > 0:\r\n--> 181 raise ValueError(\"Pipeline cannot handle mixed args and kwargs\")\r\n 182 \r\n 183 if len(kwargs) > 0:\r\n\r\nValueError: Pipeline cannot handle mixed args and kwargs\r\n\r\n```\r\n\r\nAnd afterwards:\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(task='fill-mask', model='bert-base-uncased')\r\npipe(\"I am a real [MASK]\", otherarg=True)\r\n# [{'sequence': '[CLS] i am a real. [SEP]',\r\n# 'score': 0.94329434633255,\r\n# 'token': 1012,\r\n# 'token_str': '.'},\r\n# {'sequence': '[CLS] i am a real ; [SEP]',\r\n# 'score': 0.02879592962563038,\r\n# 'token': 1025,\r\n# 'token_str': ';'},\r\n# {'sequence': '[CLS] i am a real! [SEP]',\r\n# 'score': 0.022438935935497284,\r\n # 'token': 999,\r\n# 'token_str': '!'},\r\n# {'sequence': '[CLS] i am a real? [SEP]',\r\n# 'score': 0.00518036400899291,\r\n# 'token': 1029,\r\n# 'token_str': '?'},\r\n# {'sequence': '[CLS] i am a real... [SEP]',\r\n# 'score': 3.598905823309906e-05,\r\n# 'token': 2133,\r\n# 'token_str': '...'}]\r\n```",
"Should I merge ?",
"I could start that.",
"(no need to do it in this PR, it can wait :)"
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
This PR attempts to fix some clearly wrong error messages in what I think should be
valid calls.
The decision to remove `DefaultArgumentHandler` comes from the fact that its real usage only applied to QuestionAnswering and ZeroShot, which already have their own handlers.
Having plain Python handle arguments and errors seems far more predictable, and removing `*args` from function
signatures makes the code more readable, I think. We need to be very careful though, as the number of arguments needs to stay in sync,
otherwise errors can happen (this is due to the mix of positional arguments, named positional arguments and generic keyword arguments being used together).
For the reader, the call order of functions is something like
```python
SpecificPipeline.__call__(myargument1, myargument2, **kwargs)
# Which calls
Pipeline.__call__(*args, **kwargs)
# Which in turn calls
SpecificPipeline._parse_and_tokenize(my_argument1, my_argument2, **kwargs)
```
There are also smaller quality-of-life changes, where I tried to normalize inputs as early as possible in the call stack (i.e. `SpecificPipeline.__call__`) so we don't have to do it over and over.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@mfuntowicz
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8180/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8180",
"html_url": "https://github.com/huggingface/transformers/pull/8180",
"diff_url": "https://github.com/huggingface/transformers/pull/8180.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8180.patch",
"merged_at": 1604316830000
} |
https://api.github.com/repos/huggingface/transformers/issues/8179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8179/comments | https://api.github.com/repos/huggingface/transformers/issues/8179/events | https://github.com/huggingface/transformers/issues/8179 | 733,092,078 | MDU6SXNzdWU3MzMwOTIwNzg= | 8,179 | `do_predict` option of `TrainingArguments` - but no way to pass test set. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"The `do_predict` argument (like `do-train` and `do-eval` is not used by `Trainer`), just by the training scripts provided as examples.\r\n\r\nGetting predictions on a test set is done with `trainer.predict(test_dataset)`.",
"Should corresponding documentation be added?",
"Sure, do you want to take a stab at it?",
"@sgugger I can do a PR if you want. But...\r\n\r\n... for me it smells like a design flaw when this is only for CLI usage and has no meaning for the \"normal use\".\r\nShould we consider just removing it?\r\n\r\nHow would a documentation look like? _\"This field is just a workaround for CLI value storage for the example code and has no meaning for normal usage.\"_?",
"It's not `TrainerArguments` but `TrainingArguments`, so I don't see the problem with some of those arguments being only for CLI usage. Besides, removing them would break existing code so it would do more harm than good IMO.\r\n\r\nFor the documentation itself, it's not just a workaround. Something along the line of\r\n```\r\nThis argument is not directly used by :class:`~transformers.Trainer`, it's intended to be used by your training/evaluation scripts instead. See the `example scripts <https://github.com/huggingface/transformers/tree/master/examples>`___ for more details.\r\n```\r\nwould sound better.",
"> ```\r\n> This argument is not directly used by :class:`~transformers.Trainer`, it's intended to be used by your training/evaluation scripts instead. See the `example scripts <https://github.com/huggingface/transformers/tree/master/examples>`___ for more details.\r\n> ```\r\n> \r\n> would sound better.\r\n\r\nPR has been created: #8270",
"closing this since PR was merged"
] | 1,604 | 1,605 | 1,605 | CONTRIBUTOR | null | The `TrainingArguments` class has the option to pass `do_predict=True`. The doc says: "Whether to run predictions on the test set or not."
But there is no way to pass a test set to the trainer; at least I cannot find one in the documentation. The closest thing I did find is sketched below.
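A hedged sketch of that closest thing: it assumes `trainer` is an already-trained `Trainer` and that `test_dataset` has the same form as the eval dataset. Note that nothing here is wired to `do_predict`:
```python
# Trainer.predict returns a named tuple with .predictions, .label_ids and .metrics.
output = trainer.predict(test_dataset)
print(output.metrics)
```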
Can you please clarify / fix this?
Many thanks
Philip | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8179/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8178/comments | https://api.github.com/repos/huggingface/transformers/issues/8178/events | https://github.com/huggingface/transformers/pull/8178 | 733,056,092 | MDExOlB1bGxSZXF1ZXN0NTEyODk3NDQ3 | 8,178 | Minor style improvements for the Flax BERT and RoBERTa examples | {
"login": "avital",
"id": 37586,
"node_id": "MDQ6VXNlcjM3NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/37586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avital",
"html_url": "https://github.com/avital",
"followers_url": "https://api.github.com/users/avital/followers",
"following_url": "https://api.github.com/users/avital/following{/other_user}",
"gists_url": "https://api.github.com/users/avital/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avital/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avital/subscriptions",
"organizations_url": "https://api.github.com/users/avital/orgs",
"repos_url": "https://api.github.com/users/avital/repos",
"events_url": "https://api.github.com/users/avital/events{/privacy}",
"received_events_url": "https://api.github.com/users/avital/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @LysandreJik @mfuntowicz ",
"Offline approval from @mfuntowicz!"
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
Minor style improvements:
1. Use `@nn.compact` rather than `@compact` (so as not to make it seem like `compact` is a standard Python decorator).
2. Move attribute docstrings from two `__call__` methods to comments
on the attributes themselves. (This was probably a remnant from
the pre-Linen version where the attributes were arguments to
`call`.)
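Illustrative only; a minimal sketch of the pattern these changes move to, not the actual model code:
```python
import flax.linen as nn

class TwoLayer(nn.Module):
    features: int  # output dimension (attribute comment instead of a `__call__` docstring)

    @nn.compact  # written as `nn.compact` so it is clearly a Flax decorator
    def __call__(self, x):
        x = nn.Dense(self.features)(x)
        return nn.Dense(self.features)(x)
```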
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? No. It's just adjusting the Flax example to the current best practices (I work on Flax)
- [x] Did you make sure to update the documentation with your changes? No doc changes. Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
It's not clear what the right pattern is for docstrings of dataclass attributes. I went with something pragmatic here. I couldn't find any online references for the "correct Pythonic pattern" here -- LMK if there's another form you prefer.
- [x] Did you write any new necessary tests? No new tests. Existing tests pass.
## Who can review? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8178/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8178",
"html_url": "https://github.com/huggingface/transformers/pull/8178",
"diff_url": "https://github.com/huggingface/transformers/pull/8178.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8178.patch",
"merged_at": 1604089540000
} |
https://api.github.com/repos/huggingface/transformers/issues/8177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8177/comments | https://api.github.com/repos/huggingface/transformers/issues/8177/events | https://github.com/huggingface/transformers/issues/8177 | 733,026,147 | MDU6SXNzdWU3MzMwMjYxNDc= | 8,177 | AutoTokenizer.from_pretrained function cannot be customized | {
"login": "ismymajia",
"id": 17922949,
"node_id": "MDQ6VXNlcjE3OTIyOTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17922949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ismymajia",
"html_url": "https://github.com/ismymajia",
"followers_url": "https://api.github.com/users/ismymajia/followers",
"following_url": "https://api.github.com/users/ismymajia/following{/other_user}",
"gists_url": "https://api.github.com/users/ismymajia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ismymajia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ismymajia/subscriptions",
"organizations_url": "https://api.github.com/users/ismymajia/orgs",
"repos_url": "https://api.github.com/users/ismymajia/repos",
"events_url": "https://api.github.com/users/ismymajia/events{/privacy}",
"received_events_url": "https://api.github.com/users/ismymajia/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Could you provide an example of the usage that you would like to see with the `transformers` library, so that we may see what can be done?",
"Is this the same as #8125 ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,604 | 1,610 | 1,610 | NONE | null | A customizable tokenizer is provided in the tokenizers library, so I can directly use the word segmentation data and vocab from my previous fairseq setup. However, the AutoTokenizer.from_pretrained function in transformers cannot be customized in the same way, so I have no way to use fairseq's vocab and word segmentation data directly in transformers.
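To make the gap concrete, here is a sketch of what I mean. Loading a fully custom tokenizer works in the standalone tokenizers library (the file name below is hypothetical), but there is no obvious hook to plug it into AutoTokenizer:
```python
from tokenizers import Tokenizer

# Loading a custom tokenizer definition works fine in the `tokenizers` library.
custom_tok = Tokenizer.from_file("my_fairseq_tokenizer.json")  # hypothetical file

# AutoTokenizer.from_pretrained() only accepts a model name or a directory,
# so there is no obvious way to hand it `custom_tok` directly.
```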
What needs to be done to make this possible? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8177/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8176/comments | https://api.github.com/repos/huggingface/transformers/issues/8176/events | https://github.com/huggingface/transformers/pull/8176 | 733,015,137 | MDExOlB1bGxSZXF1ZXN0NTEyODY0MzM2 | 8,176 | Fixing some warnings in DeBerta | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
Just fixes some simple warnings raised by Python: incorrect escape sequences in docstrings, plus a `collections.abc` import.
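For illustration, the kind of change involved; this is a hedged sketch rather than the actual diff:
```python
import collections.abc  # use the abc module: bare `collections.Sequence` is deprecated


def is_sequence(x):
    r"""Raw-string docstring, so escapes like \d do not raise a DeprecationWarning."""
    return isinstance(x, collections.abc.Sequence)
```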
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8176/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8176",
"html_url": "https://github.com/huggingface/transformers/pull/8176",
"diff_url": "https://github.com/huggingface/transformers/pull/8176.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8176.patch",
"merged_at": 1604063742000
} |
https://api.github.com/repos/huggingface/transformers/issues/8175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8175/comments | https://api.github.com/repos/huggingface/transformers/issues/8175/events | https://github.com/huggingface/transformers/issues/8175 | 733,001,646 | MDU6SXNzdWU3MzMwMDE2NDY= | 8,175 | Onnx converted model output shape not matching with the finetuned model (BUG) | {
"login": "user06039",
"id": 58213113,
"node_id": "MDQ6VXNlcjU4MjEzMTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/58213113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/user06039",
"html_url": "https://github.com/user06039",
"followers_url": "https://api.github.com/users/user06039/followers",
"following_url": "https://api.github.com/users/user06039/following{/other_user}",
"gists_url": "https://api.github.com/users/user06039/gists{/gist_id}",
"starred_url": "https://api.github.com/users/user06039/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/user06039/subscriptions",
"organizations_url": "https://api.github.com/users/user06039/orgs",
"repos_url": "https://api.github.com/users/user06039/repos",
"events_url": "https://api.github.com/users/user06039/events{/privacy}",
"received_events_url": "https://api.github.com/users/user06039/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It was working for me when using --pipeline sentiment-analysis",
"Hi @user06039 did you find a solution for this? Cause I am also facing the same issue."
] | 1,604 | 1,647 | 1,604 | NONE | null | I have trained a 3-class transformer classification model; the model used is distilbert-base-uncased.
Now, after training, I tried to convert the model to ONNX for faster inference using the script below:
`!python convert_graph_to_onnx.py --framework pt --model pt_line-distilbert --tokenizer distilbert-base-uncased --quantize onnx/line-distilbert.onnx`
```
====== Converting model to ONNX ======
ONNX opset version set to: 11
Loading pipeline (model: pt_line-distilbert, tokenizer: distilbert-base-uncased)
Creating folder /home/segments/onnx/linetype
Using framework PyTorch: 1.6.0
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
head_mask is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
/home/miniconda3/envs/reas/lib/python3.8/site-packages/transformers/modeling_utils.py:1645: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
input_tensor.shape == tensor_shape for input_tensor in input_tensors
====== Optimizing ONNX model ======
2020-10-30 02:50:55.673526328 [W:onnxruntime:, inference_session.cc:1143 Initialize] Serializing optimized model with Graph Optimization level greater than ORT_ENABLE_EXTENDED. The generated model may contain hardware and execution provider specific optimizations, and should only be used in the same environment the model was optimized for.
Optimized model has been written at /home/segments/onnx/line/line-distilbert-optimized.onnx: ✔
/!\ Optimized model contains hardware specific operators which might not be portable. /!\
As of onnxruntime 1.4.0, models larger than 2GB will fail to quantize due to protobuf constraint.
This limitation will be removed in the next release of onnxruntime.
Warning: onnxruntime.quantization.quantize is deprecated.
Please use quantize_static for static quantization, quantize_dynamic for dynamic quantization.
Quantized model has been written at /home/segments/onnx/line/line-distilbert-optimized-quantized.onnx: ✔
```
Now, when trying to do inference,
```python
# Imports added for completeness; the original snippet omitted them.
import numpy as np
from onnxruntime import ExecutionMode, InferenceSession, SessionOptions
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

options = SessionOptions()
options.intra_op_num_threads = 1
options.execution_mode = ExecutionMode.ORT_SEQUENTIAL

model_path = "onnx/line/line-distilbert-optimized-quantized.onnx"
session = InferenceSession(model_path, options)

tokens = tokenizer.encode_plus("did you get it?", max_length=256, truncation=True, padding='max_length')
tokens = {name: np.atleast_2d(value) for name, value in tokens.items()}

sequence, = session.run(None, tokens)
sequence.shape  # (1, 256, 768)
```
But my model's output shape should be (1, 3), since it is a 3-class classification model.
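One of the replies notes that passing `--pipeline sentiment-analysis` fixed this for them: without that flag, `convert_graph_to_onnx.py` exports the default feature-extraction pipeline, which returns hidden states of shape (batch, sequence, hidden) instead of classification logits. A hedged sketch of the re-export, with the other flags exactly as in the command above:
```
!python convert_graph_to_onnx.py --framework pt --pipeline sentiment-analysis --model pt_line-distilbert --tokenizer distilbert-base-uncased --quantize onnx/line-distilbert.onnx
```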
Beyond that, is there any way to fix this? I have gone through issue https://github.com/huggingface/transformers/issues/4825, but there is no proper solution mentioned there. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8175/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8174/comments | https://api.github.com/repos/huggingface/transformers/issues/8174/events | https://github.com/huggingface/transformers/issues/8174 | 732,972,745 | MDU6SXNzdWU3MzI5NzI3NDU= | 8,174 | Possible bug in "trainer" when training "BertForPretraining.from_pretrained()" | {
"login": "danaludwig",
"id": 6911685,
"node_id": "MDQ6VXNlcjY5MTE2ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6911685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danaludwig",
"html_url": "https://github.com/danaludwig",
"followers_url": "https://api.github.com/users/danaludwig/followers",
"following_url": "https://api.github.com/users/danaludwig/following{/other_user}",
"gists_url": "https://api.github.com/users/danaludwig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danaludwig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danaludwig/subscriptions",
"organizations_url": "https://api.github.com/users/danaludwig/orgs",
"repos_url": "https://api.github.com/users/danaludwig/repos",
"events_url": "https://api.github.com/users/danaludwig/events{/privacy}",
"received_events_url": "https://api.github.com/users/danaludwig/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Dana! I don't think this is a bug. You are not providing `BertForPreTraining` the `nsp_labels` it also requires for training, so it does not compute the loss (and then the rest fails). You should use `DataCollatorForNextSentencePrediction` to have the batches get those labels too (and it might requires using `TextDatasetForNextSentencePrediction` with it) or write your own `data_collator` that will add those labels.",
"Hi Sylvain! Thanks for the quick response! My understanding of the process for pre-training BERT is that it is self-supervised and creates it's own labels. For example, for \"next sentence prediction\", it looks at the input sentences and uses the \"next sentence\" as the label for that task. That's how it worked when I used the TensorFlow model to train my BERT from scratch. Does the HuggingFace trainer not do that? I will look at \"extDatasetForNextSentencePrediction\" to see if that has some answers. I just thought that HuggingFace framework would be easier to fine-tune than using Google TensorFlow code.",
"The trainer just does the training loop, it is independent from the tasks. Transformers provides tools to get the data together (which I mentioned) and ready for the Trainer on all the most common NLP tasks, BERT-pretraining objective included.",
"Hi Sylvain,\r\nYour suggestion did the trick! I used ‘TextDatasetForNextSentencePrediction’ to build my dataset and ‘DataCollatorForNextSentencePrediction’ for my collator. It’s training now and the validation loss is getting lower, so everything looks fine. As you remember, fine-tuning the baseline pre-trained model with new task-specific data was part of your workflow for ULMFIT, so I can imagine this use-case will come up a lot. If you would like me to clean up my test example notebook, I’d be glad to let you post it in your examples section. It took me days to get this far, so I’d like to save the next person some time if possible. \r\n\r\nThank you! Dana"
] | 1,604 | 1,604 | 1,604 | NONE | null | ## Environment info
Environment is Colab with GPU enabled. Modules are provided in the Jupyter notebook on Google Drive here:
[https://colab.research.google.com/drive/1UX6NMXA2cHGUtDJwh_U6LL-kyd8Gyt9y?usp=sharing](https://colab.research.google.com/drive/1UX6NMXA2cHGUtDJwh_U6LL-kyd8Gyt9y?usp=sharing)
### Who can help
@sgugger
## Information
Model I am using: BERT, loaded with `BertForPreTraining.from_pretrained("bert-base-uncased")`.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: Error can be reproduced with the Notebook provided above.
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
I am attempting to "fine-tune" bert-base-uncased by training it on additional sentences. I am unable to do this with Trainer - I get an error message shown in the notebook.
## To reproduce
Steps to reproduce the behavior:
1. Execute the notebook
2. First example succeeds with model BertLMHeadModel.from_pretrained("bert-base-uncased")
3. Second example fails at train() simply by changing the model to BertForPreTraining.from_pretrained("bert-base-uncased")
The bug is entirely reproduced on the linked Jupyter Notebook above, when run on Google Colab. The error message is:
`RuntimeError: grad can be implicitly created only for scalar outputs`
## Expected behavior
The model "BertForPretraining.from_pretrained("bert-base-uncased") should train on the two sentences provided.
If you know a workaround for this bug, I would appreciate it.
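For the record, the setup that resolved this (per the discussion above) looks roughly like the sketch below; the file path and the training arguments are illustrative, not taken from the notebook:
```python
from transformers import (BertForPreTraining, BertTokenizer,
                          DataCollatorForNextSentencePrediction,
                          TextDatasetForNextSentencePrediction,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# Builds sentence pairs (with next-sentence labels) from a plain-text file;
# the path is a placeholder.
dataset = TextDatasetForNextSentencePrediction(
    tokenizer=tokenizer, file_path="my_sentences.txt", block_size=128
)
# Adds the masked-LM labels and `next_sentence_label` the model needs to return a scalar loss.
collator = DataCollatorForNextSentencePrediction(tokenizer=tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```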
Sylvain - good to see you doing interesting work!! - Dana Ludwig (student of fast.ai course and owner of your book)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8174/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8173/comments | https://api.github.com/repos/huggingface/transformers/issues/8173/events | https://github.com/huggingface/transformers/issues/8173 | 732,948,955 | MDU6SXNzdWU3MzI5NDg5NTU= | 8,173 | Training loss is not decreasing when using the Roberta pre-trained model from the transformers library | {
"login": "ZahraAbbasiantaeb",
"id": 25108522,
"node_id": "MDQ6VXNlcjI1MTA4NTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25108522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZahraAbbasiantaeb",
"html_url": "https://github.com/ZahraAbbasiantaeb",
"followers_url": "https://api.github.com/users/ZahraAbbasiantaeb/followers",
"following_url": "https://api.github.com/users/ZahraAbbasiantaeb/following{/other_user}",
"gists_url": "https://api.github.com/users/ZahraAbbasiantaeb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZahraAbbasiantaeb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZahraAbbasiantaeb/subscriptions",
"organizations_url": "https://api.github.com/users/ZahraAbbasiantaeb/orgs",
"repos_url": "https://api.github.com/users/ZahraAbbasiantaeb/repos",
"events_url": "https://api.github.com/users/ZahraAbbasiantaeb/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZahraAbbasiantaeb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Could you open a post on the [forum](https://discuss.huggingface.co) instead? We try to keep issues for bugs only.",
"Hi, sure."
] | 1,604 | 1,604 | 1,604 | NONE | null | I load the RoBERTa pre-trained model from the transformers library and use it for a sentence-pair classification task. The loss used to decrease per epoch during training until last week, but now, even though all of the parameters (including the batch size and the learning rate) have the same values, the loss does not decrease when I fit my model. I am a little confused; I have trained the model with various parameters and also tried another implementation in PyTorch, but the loss is still not decreasing. Can anyone help me figure out the problem?
Here is the link to my code:
https://colab.research.google.com/drive/1CFg41KDHJSJNkehJOHbp3gfXRdva60oW?usp=sharing
and the dataset:
https://drive.google.com/drive/folders/1CUH_z_HI31-yfj8hOmRfJBKRKe_BNkku
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8173/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8172/comments | https://api.github.com/repos/huggingface/transformers/issues/8172/events | https://github.com/huggingface/transformers/pull/8172 | 732,870,098 | MDExOlB1bGxSZXF1ZXN0NTEyNzQ3MzIx | 8,172 | Create Speedtest.py | {
"login": "nihirgupta",
"id": 72348470,
"node_id": "MDQ6VXNlcjcyMzQ4NDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/72348470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nihirgupta",
"html_url": "https://github.com/nihirgupta",
"followers_url": "https://api.github.com/users/nihirgupta/followers",
"following_url": "https://api.github.com/users/nihirgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/nihirgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nihirgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nihirgupta/subscriptions",
"organizations_url": "https://api.github.com/users/nihirgupta/orgs",
"repos_url": "https://api.github.com/users/nihirgupta/repos",
"events_url": "https://api.github.com/users/nihirgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/nihirgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, could you provide more information on what this is? The template is empty, and I'm not sure what this brings to the library.",
"Don't spend time on this @LysandreJik this is just HacktoberFest spam"
] | 1,604 | 1,604 | 1,604 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8172/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8172",
"html_url": "https://github.com/huggingface/transformers/pull/8172",
"diff_url": "https://github.com/huggingface/transformers/pull/8172.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8172.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8171/comments | https://api.github.com/repos/huggingface/transformers/issues/8171/events | https://github.com/huggingface/transformers/issues/8171 | 732,859,537 | MDU6SXNzdWU3MzI4NTk1Mzc= | 8,171 | Need suggestion on contributing TFDPR | {
"login": "ratthachat",
"id": 56621342,
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratthachat",
"html_url": "https://github.com/ratthachat",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hello! Thanks for offering to contribute the TF implementation of the DPR model! Something that may help you is to open a PR very early on, even if you have a lot of questions. This way we can help provide pointers, and we can guide you in the right direction. \r\n\r\nAnother aspect that may be of tremendous help, would be to follow the checklist when adding a new model. It is available [here](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model). If you open a PR, we recommend to put this checklist in the description so that everybody can follow better.\r\n\r\nLet me know if I can help further.",
"@LysandreJik Thanks for your suggestion and the checklist which is just what I want!\r\nI will try to follow the checklist as much as possible and then PR. (UPDATED : already open a PR with checklist)\r\nPlease let me know if I should close this issue.",
"This is great, only the tests are left! No need to close the issue here, we can close this issue once the PR is merged.",
"Thanks for your kind words @LysandreJik !\r\nAt first, I have no idea how to test. Now I know I have to translate `test_modeling_dpr.py` and see an example on the recent `test_modeling_tf_bart.py` . \r\n",
"@LysandreJik :D\r\nAfter several hours of testing and debugging, my current model is alreay passed 27 tests :D \r\nThe test run is in Colab here : (in the last cell)\r\nhttps://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing\r\n\r\nMy [current repo](https://github.com/ratthachat/transformers) already contained `test_modeling_tf_dpr.py` \r\nCould you please suggest me the next step (make a repo update with latest Transformers ?)",
"The next steps would be for us to review what you've contributed until now! We'll take a look as soon as possible.",
"Thanks Lysandre! I actually have aimed for TFRag . Meanwhile, I will make a new branch and use TFDPR on translating TFRag .",
"Close the issue as TFDPR is already merged. Very happy. Thanks a lot everybody!!"
] | 1,604 | 1,605 | 1,605 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
Hi, I would love to try contributing TFDPR. This is my first time, so I need some suggestions.
I have followed @sshleifer 's [great PR on TFBart model](https://github.com/huggingface/transformers/commit/829842159efeb1f920cbbb1daf5ad67e0114d0b9) on 4 files: `__init__.py`, `convert_pytorch_checkpoint_to_tf2.py`, `utils/dummy_tf_objects.py`, and (newly created) `modeling_tf_dpr.py`.
Now the TF model works properly and can load PyTorch's weights successfully, producing the same output as its PyTorch counterpart **except** for small random noise (~1e-5), which I suspect comes from a dtype difference, but I could not find the cause.
I guess I need to add documentation in docs/source/model_doc/dpr.rst, and that's all?
**My question is: do I need to change/fix any other files, and/or do anything else before making the PR?**
<!-- Important information -->
To resolve TF vs. PyTorch naming issues, there's one change regarding `TFBertModel` vs. `TFBertMainLayer`, as [discussed here](https://discuss.huggingface.co/t/solved-issue-on-translating-dpr-to-tfdpr-on-loading-pytorch-weights-to-tf-model/1764).
Thanks to @sshleifer for his help to solve the issue.
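As an illustration of the intended usage, cross-loading the PyTorch weights into the new TF class might look like the sketch below (`TFDPRContextEncoder` is the class proposed here, and the checkpoint name is an assumption for illustration):
```python
# Sketch: cross-loading PyTorch DPR weights into the proposed TF class.
from transformers import DPRContextEncoderTokenizer, TFDPRContextEncoder

name = "facebook/dpr-ctx_encoder-single-nq-base"  # assumed checkpoint
tokenizer = DPRContextEncoderTokenizer.from_pretrained(name)
# from_pt=True converts the PyTorch checkpoint into TF weights on the fly.
model = TFDPRContextEncoder.from_pretrained(name, from_pt=True)

inputs = tokenizer("Is my dog cute?", return_tensors="tf")
embeddings = model(inputs["input_ids"])[0]  # pooled passage embedding
```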
## Open source status
* [X] the model implementation is available: (give details)
You can see all the modified code with a test run at:
https://colab.research.google.com/drive/1lU4fx7zkr-Y3CXa3wmHIY8yJhKdiN3DI?usp=sharing
(to easily navigate the changes, please “find on page” for e.g. `TFDPRContextEncoder` )
* [X] the model weights are available: (give details)
At the moment, I use the existing PyTorch weights, but will upload TF weights too.
* [X] who are the authors: (mention them, if possible by @gh-username)
@ratthachat | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8171/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8170/comments | https://api.github.com/repos/huggingface/transformers/issues/8170/events | https://github.com/huggingface/transformers/pull/8170 | 732,849,320 | MDExOlB1bGxSZXF1ZXN0NTEyNzMwMzg5 | 8,170 | Create README.md | {
"login": "kuppulur",
"id": 3698879,
"node_id": "MDQ6VXNlcjM2OTg4Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3698879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kuppulur",
"html_url": "https://github.com/kuppulur",
"followers_url": "https://api.github.com/users/kuppulur/followers",
"following_url": "https://api.github.com/users/kuppulur/following{/other_user}",
"gists_url": "https://api.github.com/users/kuppulur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kuppulur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuppulur/subscriptions",
"organizations_url": "https://api.github.com/users/kuppulur/orgs",
"repos_url": "https://api.github.com/users/kuppulur/repos",
"events_url": "https://api.github.com/users/kuppulur/events{/privacy}",
"received_events_url": "https://api.github.com/users/kuppulur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8170/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8170",
"html_url": "https://github.com/huggingface/transformers/pull/8170",
"diff_url": "https://github.com/huggingface/transformers/pull/8170.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8170.patch",
"merged_at": 1604651153000
} |
https://api.github.com/repos/huggingface/transformers/issues/8169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8169/comments | https://api.github.com/repos/huggingface/transformers/issues/8169/events | https://github.com/huggingface/transformers/pull/8169 | 732,840,716 | MDExOlB1bGxSZXF1ZXN0NTEyNzIzMzY3 | 8,169 | Create README.md | {
"login": "kuppulur",
"id": 3698879,
"node_id": "MDQ6VXNlcjM2OTg4Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3698879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kuppulur",
"html_url": "https://github.com/kuppulur",
"followers_url": "https://api.github.com/users/kuppulur/followers",
"following_url": "https://api.github.com/users/kuppulur/following{/other_user}",
"gists_url": "https://api.github.com/users/kuppulur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kuppulur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuppulur/subscriptions",
"organizations_url": "https://api.github.com/users/kuppulur/orgs",
"repos_url": "https://api.github.com/users/kuppulur/repos",
"events_url": "https://api.github.com/users/kuppulur/events{/privacy}",
"received_events_url": "https://api.github.com/users/kuppulur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8169/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8169",
"html_url": "https://github.com/huggingface/transformers/pull/8169",
"diff_url": "https://github.com/huggingface/transformers/pull/8169.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8169.patch",
"merged_at": 1604651168000
} |
https://api.github.com/repos/huggingface/transformers/issues/8168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8168/comments | https://api.github.com/repos/huggingface/transformers/issues/8168/events | https://github.com/huggingface/transformers/pull/8168 | 732,839,291 | MDExOlB1bGxSZXF1ZXN0NTEyNzIyMjMw | 8,168 | Create README.md | {
"login": "kuppulur",
"id": 3698879,
"node_id": "MDQ6VXNlcjM2OTg4Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3698879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kuppulur",
"html_url": "https://github.com/kuppulur",
"followers_url": "https://api.github.com/users/kuppulur/followers",
"following_url": "https://api.github.com/users/kuppulur/following{/other_user}",
"gists_url": "https://api.github.com/users/kuppulur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kuppulur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuppulur/subscriptions",
"organizations_url": "https://api.github.com/users/kuppulur/orgs",
"repos_url": "https://api.github.com/users/kuppulur/repos",
"events_url": "https://api.github.com/users/kuppulur/events{/privacy}",
"received_events_url": "https://api.github.com/users/kuppulur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | # What does this PR do?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8168/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8168",
"html_url": "https://github.com/huggingface/transformers/pull/8168",
"diff_url": "https://github.com/huggingface/transformers/pull/8168.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8168.patch",
"merged_at": 1604651134000
} |
https://api.github.com/repos/huggingface/transformers/issues/8167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8167/comments | https://api.github.com/repos/huggingface/transformers/issues/8167/events | https://github.com/huggingface/transformers/pull/8167 | 732,828,961 | MDExOlB1bGxSZXF1ZXN0NTEyNzEzOTk1 | 8,167 | Create README.md | {
"login": "kuppulur",
"id": 3698879,
"node_id": "MDQ6VXNlcjM2OTg4Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3698879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kuppulur",
"html_url": "https://github.com/kuppulur",
"followers_url": "https://api.github.com/users/kuppulur/followers",
"following_url": "https://api.github.com/users/kuppulur/following{/other_user}",
"gists_url": "https://api.github.com/users/kuppulur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kuppulur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuppulur/subscriptions",
"organizations_url": "https://api.github.com/users/kuppulur/orgs",
"repos_url": "https://api.github.com/users/kuppulur/repos",
"events_url": "https://api.github.com/users/kuppulur/events{/privacy}",
"received_events_url": "https://api.github.com/users/kuppulur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks for sharing! You should add more metadata to your model card if possible: https://huggingface.co/docs#what-metadata-can-i-add-to-my-model-card"
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | Telugu BERTU Readme file
# What does this PR do?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8167/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8167/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8167",
"html_url": "https://github.com/huggingface/transformers/pull/8167",
"diff_url": "https://github.com/huggingface/transformers/pull/8167.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8167.patch",
"merged_at": 1604651072000
} |
https://api.github.com/repos/huggingface/transformers/issues/8166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8166/comments | https://api.github.com/repos/huggingface/transformers/issues/8166/events | https://github.com/huggingface/transformers/pull/8166 | 732,826,743 | MDExOlB1bGxSZXF1ZXN0NTEyNzEyMjE0 | 8,166 | Replace swish with silu | {
"login": "TFUsers",
"id": 25044281,
"node_id": "MDQ6VXNlcjI1MDQ0Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/25044281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TFUsers",
"html_url": "https://github.com/TFUsers",
"followers_url": "https://api.github.com/users/TFUsers/followers",
"following_url": "https://api.github.com/users/TFUsers/following{/other_user}",
"gists_url": "https://api.github.com/users/TFUsers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TFUsers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TFUsers/subscriptions",
"organizations_url": "https://api.github.com/users/TFUsers/orgs",
"repos_url": "https://api.github.com/users/TFUsers/repos",
"events_url": "https://api.github.com/users/TFUsers/events{/privacy}",
"received_events_url": "https://api.github.com/users/TFUsers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @TFUsers for this important PR.\r\n\r\nAs far as I know the activation names are also directly inside the the `config.json` files in the model hub. @sgugger @LysandreJik Do we plan to update all of them?",
"Very good point @jplu. We can't change those names in the config hosted online because it wouldn't be backward-compatible, so we need to still accept the old names (without documenting the behavior). So we should leave the old `swish` in the dictionaries `ACT2FN`.",
"It looks like everything passes (ignoring \"src/transformers/activations.py:52:5: F811 redefinition of unused 'silu' from line 40\").",
"> I guess the cleanest approach in that regard would be to remove the definition of ACT2FN in these files, and instead import the centralized ACT2FN from the activations files\r\n\r\nLine 30 of `modeling_bert.py` is\r\n`from .activations import ACT2FN`\r\n\r\nAre the main concerns resolved?",
"You're right, I was checking in the wrong file. Could you fix the code quality issue related to the redefinition of `silu`? You can follow what's done with the `gelu` method, by renaming the `silu` method to `_silu_python` and doing an if/else statement according to the torch version.\r\n\r\nAlso that version check (same with the `gelu`) doesn't seem robust at all. Could we use the `packaging` util to do something better? Something like:\r\n\r\n```py\r\nfrom packaging import version\r\n\r\nif version.parse(torch.__version__) < version.parse(\"1.4\"):\r\n ...\r\n```",
"Thanks!"
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | This pull request replaces swish with silu. Note that "silu" still maps to tf.keras.activations.swish rather than tf.keras.activations.silu for TensorFlow, since silu is available in the TensorFlow nightlies but not yet in the stable release.
This fixes https://github.com/huggingface/transformers/issues/8100
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8166/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8166",
"html_url": "https://github.com/huggingface/transformers/pull/8166",
"diff_url": "https://github.com/huggingface/transformers/pull/8166.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8166.patch",
"merged_at": 1604084951000
} |
https://api.github.com/repos/huggingface/transformers/issues/8165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8165/comments | https://api.github.com/repos/huggingface/transformers/issues/8165/events | https://github.com/huggingface/transformers/pull/8165 | 732,826,008 | MDExOlB1bGxSZXF1ZXN0NTEyNzExNjIw | 8,165 | Fix typo: s/languaged/language/ | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8165/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8165",
"html_url": "https://github.com/huggingface/transformers/pull/8165",
"diff_url": "https://github.com/huggingface/transformers/pull/8165.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8165.patch",
"merged_at": 1604071324000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8164/comments | https://api.github.com/repos/huggingface/transformers/issues/8164/events | https://github.com/huggingface/transformers/pull/8164 | 732,721,476 | MDExOlB1bGxSZXF1ZXN0NTEyNjE4Nzgz | 8,164 | [s2s] Option to aggregate rouge deterministically | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,604 | 1,614 | 1,614 | CONTRIBUTOR | null | Optionally take randomness/sampling out of calculate_rouge_score.
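For illustration, deterministic aggregation can be done with a plain mean over per-example scores instead of bootstrap resampling (a sketch with assumed `preds`/`refs` lists, not the PR's exact code):
```python
# Sketch: deterministic ROUGE aggregation (assumed inputs, simplified).
from rouge_score import rouge_scorer

preds = ["a cute dog walks in the park"]   # assumed predictions
refs = ["the cute dog walked in the park"]  # assumed references

scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
scores = [scorer.score(ref, pred)["rouge2"].fmeasure for ref, pred in zip(refs, preds)]
rouge2 = sum(scores) / len(scores)  # plain mean: no random resampling, fully deterministic
```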
Not breaking, the default is unchanged. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8164/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8164",
"html_url": "https://github.com/huggingface/transformers/pull/8164",
"diff_url": "https://github.com/huggingface/transformers/pull/8164.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8164.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8163/comments | https://api.github.com/repos/huggingface/transformers/issues/8163/events | https://github.com/huggingface/transformers/pull/8163 | 732,707,755 | MDExOlB1bGxSZXF1ZXN0NTEyNjA3MTI5 | 8,163 | [CI] Better reports #2 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!",
"OK, everything seems to be working well. Let me know if you have any comments/suggestions/recommendations before replicating this to the rest of the jobs.\r\n\r\nSee: https://github.com/huggingface/transformers/runs/1329578690?check_suite_focus=true \r\n\r\nI will wait for https://github.com/huggingface/transformers/pull/8007 to be merged before spreading the love to the rest of the jobs, so that they won't need to deal with a lot of conflicts.",
"I also proposed this a `pytest` feature: https://github.com/pytest-dev/pytest/issues/7972 - probably others would benefit from it.\r\n"
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | As discussed at https://github.com/huggingface/transformers/pull/8110 this PR:
* [x] - generates 3 types of failure reports - long, short and one-per-line
* [x] - fixes multiple test suite tasks in a single job to allow them all to run regardless of the outcome of the previous test suites (using `if: always()`
* [x] - adds a workaround for the cumbersome way github makes the artifacts available by printing the short failure report in its own tab, so getting to errors should be very easy now.
Once we perfect this hack to our liking, I intend to submit this to `pytest` and see if perhaps they would consider accepting it as a feature.
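For illustration, a one-failure-per-line report can be produced from a small `conftest.py` hook (a sketch under an assumed file name; not the exact code in this PR):
```python
# conftest.py (sketch): dump one line per failed test after the session.
def pytest_terminal_summary(terminalreporter, exitstatus, config):
    failed = terminalreporter.stats.get("failed", [])
    with open("report_failures_line.txt", "w") as f:  # assumed report file name
        for report in failed:
            f.write(f"{report.nodeid}\n")
```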
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8163/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8163",
"html_url": "https://github.com/huggingface/transformers/pull/8163",
"diff_url": "https://github.com/huggingface/transformers/pull/8163.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8163.patch",
"merged_at": 1604014205000
} |
https://api.github.com/repos/huggingface/transformers/issues/8162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8162/comments | https://api.github.com/repos/huggingface/transformers/issues/8162/events | https://github.com/huggingface/transformers/pull/8162 | 732,680,841 | MDExOlB1bGxSZXF1ZXN0NTEyNTg0NjMw | 8,162 | Fix typo: s/Chinees/Chinese/ | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh, it was already fixed in #8159"
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8162/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8162",
"html_url": "https://github.com/huggingface/transformers/pull/8162",
"diff_url": "https://github.com/huggingface/transformers/pull/8162.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8162.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8161/comments | https://api.github.com/repos/huggingface/transformers/issues/8161/events | https://github.com/huggingface/transformers/issues/8161 | 732,680,098 | MDU6SXNzdWU3MzI2ODAwOTg= | 8,161 | generate() always starts with bos_token_id | {
"login": "j-min",
"id": 18069263,
"node_id": "MDQ6VXNlcjE4MDY5MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/18069263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j-min",
"html_url": "https://github.com/j-min",
"followers_url": "https://api.github.com/users/j-min/followers",
"following_url": "https://api.github.com/users/j-min/following{/other_user}",
"gists_url": "https://api.github.com/users/j-min/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j-min/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j-min/subscriptions",
"organizations_url": "https://api.github.com/users/j-min/orgs",
"repos_url": "https://api.github.com/users/j-min/repos",
"events_url": "https://api.github.com/users/j-min/events{/privacy}",
"received_events_url": "https://api.github.com/users/j-min/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @j-min, I don't think we will change this behavior because a) huge backward breaking change b) I think it's important to understand that generate **has** to start with a BOS/decoder_start_token_id (see Encoder-Decoder blog post: https://huggingface.co/blog/encoder-decoder)\r\n\r\nAlso you could add `skip_special_tokens=True` to the decode method to not return this token"
] | 1,604 | 1,604 | 1,604 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Ubuntu 16.04
- Python version: 3.7
- PyTorch version (GPU?): 1.6 GPU
- Tensorflow version (GPU?): Doesn't matter
- Using GPU in script?: Doesn't matter
- Using distributed or parallel set-up in script?: Doesn't matter
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
- TextGeneration: @TevenLeScao
- T5: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* the official example scripts: [T5ForConditionalGeneration Doc](https://huggingface.co/transformers/model_doc/t5.html?highlight=t5#transformers.T5ForConditionalGeneration)
## To reproduce
Steps to reproduce the behavior:
```python3
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', return_dict=True)
input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
outputs = model(input_ids=input_ids, labels=labels)
model.generate(input_ids)[0]
>>> tensor([ 0, 32099, 2447, 704, 32098, 8, 32097, 2447, 5, 1])
# <- start with 0 = pad_token_id = decoder_start_token_id of T5
input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you ", return_tensors="pt").input_ids # Batch size 1
outputs = model.generate(input_ids)
outputs
>>> tensor([[ 0, 2116, 43, 2008, 24, 293, 53, 3, 9, 1782, 19, 207,
21, 25, 3, 5, 1]])
# <- start with 0 = pad_token_id = decoder_start_token_id of T5
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Generation outputs should not start with 0 (= pad_token_id = decoder_start_token_id of T5)
```python3
>>> tensor([ 32099, 2447, 704, 32098, 8, 32097, 2447, 5, 1])
>>> tensor([[ 2116, 43, 2008, 24, 293, 53, 3, 9, 1782, 19, 207,
21, 25, 3, 5, 1]])
```
<!-- A clear and concise description of what you would expect to happen. -->
## Analysis / Suggestion
This happens because the `input_ids` are initialized with [bos_token_id](https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/generation_utils.py#L329) or [decoder_start_token_id](https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/generation_utils.py#L432) and then iteratively updated during `.generate()`.
But should `.generate()` return the first token? It is confusing and makes it hard to debug since `tokenizer.decode()` hides this behavior.
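A caller-side workaround today (a sketch reusing `tokenizer`, `model`, and `input_ids` from the snippet above) is to drop the leading token or decode with `skip_special_tokens=True`:
```python
# Sketch: strip the decoder start token on the caller side.
outputs = model.generate(input_ids)
trimmed = outputs[:, 1:]  # drop the leading pad/decoder-start token (0 for T5)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)  # also hides it
```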
It would be better to exclude the first token and just return `output[:, 1:]` in [the last line of generate()](https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/generation_utils.py#L512). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8161/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/8161/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8160/comments | https://api.github.com/repos/huggingface/transformers/issues/8160/events | https://github.com/huggingface/transformers/issues/8160 | 732,670,103 | MDU6SXNzdWU3MzI2NzAxMDM= | 8,160 | ConnectionError: ('Connection aborted.', OSError("(32, 'EPIPE')")) | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @julien-c ",
"We've seen similar issues in the past @danyaljj. \r\n\r\nWe are going to release a new upload system for models in the coming week, can this wait till then? If it can't, and if you can upload your model to another bucket, we can copy it over manually. Let us know.",
"Hey, @julien-c 👋 Sounds fair, I think I can wait until your new system is rolled out. "
] | 1,604 | 1,605 | 1,605 | CONTRIBUTOR | null | Getting this error uploading a T5-3b model (~5 GB) to the model-hub.
I don't think it's my connection; I didn't have any issues with other smaller models, except this one.
Any thoughts on what could be the issue?
```
$ transformers-cli upload unifiedqa-t5-3b --organization allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/tokenizer_config.json to S3 under filename unifiedqa-t5-3b/tokenizer_config.json and namespace allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/special_tokens_map.json to S3 under filename unifiedqa-t5-3b/special_tokens_map.json and namespace allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/config.json to S3 under filename unifiedqa-t5-3b/config.json and namespace allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/spiece.model to S3 under filename unifiedqa-t5-3b/spiece.model and namespace allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/pytorch_model.bin to S3 under filename unifiedqa-t5-3b/pytorch_model.bin and namespace allenai
Proceed? [Y/n] y
Uploading... This might take a while if files are large
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/allenai/unifiedqa-t5-3b/tokenizer_config.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/allenai/unifiedqa-t5-3b/special_tokens_map.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/allenai/unifiedqa-t5-3b/config.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/allenai/unifiedqa-t5-3b/spiece.model
0%| | 9756672/11406640119 [00:02<1:41:40, 1868312.30it/s]Traceback (most recent call last):
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 331, in _send_until_done
return self.connection.send(data)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1737, in send
self._raise_ssl_error(self._ssl, result)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1639, in _raise_ssl_error
raise SysCallError(errno, errorcode.get(errno))
OpenSSL.SSL.SysCallError: (32, 'EPIPE')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 603, in urlopen
chunked=chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 355, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1244, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1290, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1239, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1065, in _send_output
self.send(chunk)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 987, in send
self.sock.sendall(data)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 342, in sendall
sent = self._send_until_done(data[total_sent:total_sent + SSL_WRITE_BLOCKSIZE])
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 337, in _send_until_done
raise SocketError(str(e))
OSError: (32, 'EPIPE')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 641, in urlopen
_stacktrace=sys.exc_info()[2])
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/util/retry.py", line 368, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/packages/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 603, in urlopen
chunked=chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 355, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1244, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1290, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1239, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1065, in _send_output
self.send(chunk)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 987, in send
self.sock.sendall(data)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 342, in sendall
sent = self._send_until_done(data[total_sent:total_sent + SSL_WRITE_BLOCKSIZE])
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 337, in _send_until_done
raise SocketError(str(e))
urllib3.exceptions.ProtocolError: ('Connection aborted.', OSError("(32, 'EPIPE')"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/danielk/opt/anaconda3/bin/transformers-cli", line 10, in <module>
sys.exit(main())
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/transformers/commands/transformers_cli.py", line 33, in main
service.run()
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/transformers/commands/user.py", line 234, in run
token=token, filename=filename, filepath=filepath, organization=self.args.organization
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/transformers/hf_api.py", line 168, in presign_and_upload
r = requests.put(urls.write, data=data, headers={"content-type": urls.type})
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/api.py", line 131, in put
return request('put', url, data=data, **kwargs)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError("(32, 'EPIPE')"))
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8160/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8159/comments | https://api.github.com/repos/huggingface/transformers/issues/8159/events | https://github.com/huggingface/transformers/pull/8159 | 732,631,894 | MDExOlB1bGxSZXF1ZXN0NTEyNTQzMjY2 | 8,159 | Fix typo: indinces -> indices | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Looks good, thanks! Just make sure to run `make style` to have our scripts automatically fix the files you changed.\r\n\r\nOh. So should I send a new patch with the changes after running `make style`?",
"I'm not following. Your last commit had the styling changes so all is good.",
"> I'm not following. Your last commit had the styling changes so all is good.\r\n\r\nOh. But if I run `make style` I see tons of changes (193 files changed, 754 insertions(+), 2969 deletions(-)). For example, this file:\r\n\r\n```diff\r\ndiff --git a/examples/adversarial/utils_hans.py b/examples/adversarial/utils_hans.py\r\nindex bf0623ff..17d4a8c4 100644\r\n--- a/examples/adversarial/utils_hans.py\r\n+++ b/examples/adversarial/utils_hans.py\r\n@@ -112,10 +112,7 @@ if is_torch_available():\r\n cached_features_file = os.path.join(\r\n data_dir,\r\n \"cached_{}_{}_{}_{}\".format(\r\n- \"dev\" if evaluate else \"train\",\r\n- tokenizer.__class__.__name__,\r\n- str(max_seq_length),\r\n- task,\r\n+ \"dev\" if evaluate else \"train\", tokenizer.__class__.__name__, str(max_seq_length), task,\r\n ),\r\n )\r\n label_list = processor.get_labels()\r\n@@ -281,10 +278,7 @@ class HansProcessor(DataProcessor):\r\n \r\n \r\n def hans_convert_examples_to_features(\r\n- examples: List[InputExample],\r\n- label_list: List[str],\r\n- max_length: int,\r\n- tokenizer: PreTrainedTokenizer,\r\n+ examples: List[InputExample], label_list: List[str], max_length: int, tokenizer: PreTrainedTokenizer,\r\n ):\r\n \"\"\"\r\n Loads a data file into a list of ``InputFeatures``\r\n```",
"Are you sure you have proper versions of black/isort/flake8 ? Run `pip install -e .[dev]` in the repo to make sure you have them.\r\n",
"Oh, yeah, it was that. Silly mistake. :hand: Sorry for the noise!",
"Np!"
] | 1,604 | 1,604 | 1,604 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8159/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8159",
"html_url": "https://github.com/huggingface/transformers/pull/8159",
"diff_url": "https://github.com/huggingface/transformers/pull/8159.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8159.patch",
"merged_at": 1604005460000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8158/comments | https://api.github.com/repos/huggingface/transformers/issues/8158/events | https://github.com/huggingface/transformers/issues/8158 | 732,563,139 | MDU6SXNzdWU3MzI1NjMxMzk= | 8,158 | EncoderDecoderModel: tie weights between different classes of models | {
"login": "alexyalunin",
"id": 23011284,
"node_id": "MDQ6VXNlcjIzMDExMjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/23011284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexyalunin",
"html_url": "https://github.com/alexyalunin",
"followers_url": "https://api.github.com/users/alexyalunin/followers",
"following_url": "https://api.github.com/users/alexyalunin/following{/other_user}",
"gists_url": "https://api.github.com/users/alexyalunin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexyalunin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexyalunin/subscriptions",
"organizations_url": "https://api.github.com/users/alexyalunin/orgs",
"repos_url": "https://api.github.com/users/alexyalunin/repos",
"events_url": "https://api.github.com/users/alexyalunin/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexyalunin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yeah I think you have a good point here! Was discussing this with @ibeltagy and I think we should add a `_tie_encoder_decoder_word_embeddings(...)` that does exactly what you suggested. We should probably run this method when initializing an Encoder-Decoder model and if word embeddings are of the same size. We can provide a `tie_encoder_decoder_word_embeds` config params that defaults to True. \r\n\r\n@alexyalunin do you want to try to make a PR for this ? :-)",
"> Yeah I think you have a good point here! Was discussing this with @ibeltagy and I think we should add a `_tie_encoder_decoder_word_embeddings(...)` that does exactly what you suggested. We should probably run this method when initializing an Encoder-Decoder model and if word embeddings are of the same size. We can provide a `tie_encoder_decoder_word_embeds` config params that defaults to True.\r\n> \r\n> @alexyalunin do you want to try to make a PR for this ? :-)\r\n\r\nOk, let me try this. I will put you in the reviewers. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | # 🚀 Feature request
Tie weights between different classes of models, tie embedding matrices, update tutorial.
## Motivation
I have been following the Longformer2Roberta tutorial https://github.com/huggingface/transformers/blob/master/model_cards/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16/README.md and it seems like the crucial part of tying weights is missing.
The EncoderDecoderModel is initialized with "allenai/longformer-base-4096" and "roberta-base", and "allenai/longformer-base-4096" was in turn initialized from "roberta-base". It seems natural to be able to tie their attention and FFNN weights, although dealing with the positional embeddings might be problematic. In any case, one feature that I think should definitely be implemented is tying the embedding matrices.
## Your contribution
For now I solve the issue with
```
model.encoder.embeddings.word_embeddings.weight = model.decoder.roberta.embeddings.word_embeddings.weight
```
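A minimal sketch of what such a `_tie_encoder_decoder_word_embeddings(...)` helper could look like (illustrative only, not the actual transformers implementation; it uses the generic `get_input_embeddings`/`set_input_embeddings` accessors so it doesn't hard-code model internals):
```python
# Illustrative sketch, not the real implementation.
def _tie_encoder_decoder_word_embeddings(model):
    enc_embeddings = model.encoder.get_input_embeddings()
    dec_embeddings = model.decoder.get_input_embeddings()
    # Only tie when the vocab sizes (and thus embedding shapes) match.
    if enc_embeddings.weight.shape != dec_embeddings.weight.shape:
        raise ValueError("Encoder and decoder word embeddings have different shapes")
    dec_embeddings.weight = enc_embeddings.weight
    model.decoder.set_input_embeddings(dec_embeddings)
```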
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8158/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8157/comments | https://api.github.com/repos/huggingface/transformers/issues/8157/events | https://github.com/huggingface/transformers/pull/8157 | 732,535,199 | MDExOlB1bGxSZXF1ZXN0NTEyNDYwMzkz | 8,157 | [testing] distributed: correct subprocess output checking | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR fixes an issue revealed on CI - https://github.com/huggingface/transformers/runs/1327577422
* the external subprocess runner will now be more flexible and check `stdout|stderr` to validate that the subprocess sent at least some output. Currently the code checks only `stdout`, which isn't right, since the subprocess may not generate any (see the sketch after this list).
* adds `stdout:` prefix to subprocess' stdout, like it was already doing for `stderr`.
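A rough sketch of what the relaxed check could look like (the helper name and structure here are illustrative only, not the actual test-suite code; the prefixing and the `stdout`-or-`stderr` assertion mirror the change described above):
```python
import subprocess
import sys

def run_and_check_output(cmd):
    # Hypothetical helper: run the command and echo both streams with prefixes.
    result = subprocess.run(cmd, capture_output=True, text=True)
    for line in result.stdout.splitlines():
        print(f"stdout: {line}")
    for line in result.stderr.splitlines():
        print(f"stderr: {line}", file=sys.stderr)
    # The subprocess may legitimately write to only one of the two streams,
    # so require output on stdout OR stderr rather than stdout alone.
    assert result.stdout or result.stderr, "subprocess produced no output at all"
    return result
```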
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8157/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8157",
"html_url": "https://github.com/huggingface/transformers/pull/8157",
"diff_url": "https://github.com/huggingface/transformers/pull/8157.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8157.patch",
"merged_at": 1603994725000
} |
https://api.github.com/repos/huggingface/transformers/issues/8156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8156/comments | https://api.github.com/repos/huggingface/transformers/issues/8156/events | https://github.com/huggingface/transformers/issues/8156 | 732,526,024 | MDU6SXNzdWU3MzI1MjYwMjQ= | 8,156 | BertTokenizer loses unicode character | {
"login": "arvieFrydenlund",
"id": 1606458,
"node_id": "MDQ6VXNlcjE2MDY0NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1606458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arvieFrydenlund",
"html_url": "https://github.com/arvieFrydenlund",
"followers_url": "https://api.github.com/users/arvieFrydenlund/followers",
"following_url": "https://api.github.com/users/arvieFrydenlund/following{/other_user}",
"gists_url": "https://api.github.com/users/arvieFrydenlund/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arvieFrydenlund/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arvieFrydenlund/subscriptions",
"organizations_url": "https://api.github.com/users/arvieFrydenlund/orgs",
"repos_url": "https://api.github.com/users/arvieFrydenlund/repos",
"events_url": "https://api.github.com/users/arvieFrydenlund/events{/privacy}",
"received_events_url": "https://api.github.com/users/arvieFrydenlund/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm having a similar problem since upgrading to 4.0 where certain unicode characters are being \"eaten\" even though I set 'use_fast' = False.\r\n\r\n## Environment info\r\n\r\n- `transformers` version: 4.0.0\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.8.6\r\n- PyTorch version (GPU?): 1.7.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n \r\n### Who can help\r\n@mfuntowicz\r\n\r\n## Information\r\n\r\nBertTokenizer\r\n\r\nThe problem comes from certain unicode character being not turned correctly into UNK characters by the tokenizers when they did in earlier versions.\r\n\r\n## To reproduce\r\n\r\n```\r\nimport transformers\r\nimport torch\r\nfrom transformers import BertTokenizer\r\n\r\nprint(torch.__version__)\r\nprint(transformers.__version__)\r\n\r\n# THERE IS A UNICODE character between the , and '' (specifically \\U+200D\\U+200D\\U+200D\\U+200D)\r\nsentence = \": ءُپَاعَر سِيْا , '' Upal\"\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\r\n \"bert-base-cased\",\r\n do_lower_case=False,\r\n use_fast=False\r\n )\r\n\r\nprint(tokenizer.tokenize(sentence))\r\n\r\n# output 4.0.0\r\n\r\n[':', '[UNK]', '[UNK]', ',', \"'\", \"'\", 'Up', '##al']\r\n\r\n# output from 3.0.1\r\n\r\n[':', '[UNK]', '[UNK]', ',', '[UNK]', \"'\", \"'\", 'Up', '##al'\r\n```",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,603 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-109-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
The tokenizer seems to lose a specific unicode character on tokenization. From the sentence
`General Saprang Kalayanamitr ( Thai : <unk> ่ ง กัลยาณมิตร ;`
from line 2557 of the Wiki02 training dataset, there is a little dot after and above `<unk>`
however the tokenizer produces
` ['general', 'sap', '##rang', 'kala', '##yana', '##mit', '##r', '(', 'thai', ':', '<', 'un', '##k', '>', 'ง', '[UNK]', ';']`
## To reproduce
`t = BertTokenizer.from_pretrained('bert-base-uncased')`
`o = t.tokenize('General Saprang Kalayanamitr ( Thai : <unk> ่ ง กัลยาณมิตร ;')`
`o`
`['general', 'sap', '##rang', 'kala', '##yana', '##mit', '##r', '(', 'thai', ':', '<', 'un', '##k', '>', 'ง', '[UNK]', ';']`
## Expected behavior
A subword should be produced for the ' ่ ' token. Otherwise, there should be an option to emit a warning for removed characters.
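In the meantime, a rough diagnostic for spotting silently dropped characters could look like this (just a sketch, reusing `t` from above; `basic_tokenizer` is the slow tokenizer's pre-tokenization step, and lowercasing/accent-stripping means normalized characters get flagged too):
```python
# Compare input characters against what survives basic tokenization.
text = 'General Saprang Kalayanamitr ( Thai : <unk> ่ ง กัลยาณมิตร ;'
kept = set("".join(t.basic_tokenizer.tokenize(text)))
dropped = {c for c in text if not c.isspace() and c.lower() not in kept}
if dropped:
    print(f"Characters removed by the tokenizer: {dropped}")
```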
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8156/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8155/comments | https://api.github.com/repos/huggingface/transformers/issues/8155/events | https://github.com/huggingface/transformers/issues/8155 | 732,491,539 | MDU6SXNzdWU3MzI0OTE1Mzk= | 8,155 | ONNX T5 with Beam Search | {
"login": "amanpreet692",
"id": 42522643,
"node_id": "MDQ6VXNlcjQyNTIyNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/42522643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amanpreet692",
"html_url": "https://github.com/amanpreet692",
"followers_url": "https://api.github.com/users/amanpreet692/followers",
"following_url": "https://api.github.com/users/amanpreet692/following{/other_user}",
"gists_url": "https://api.github.com/users/amanpreet692/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amanpreet692/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanpreet692/subscriptions",
"organizations_url": "https://api.github.com/users/amanpreet692/orgs",
"repos_url": "https://api.github.com/users/amanpreet692/repos",
"events_url": "https://api.github.com/users/amanpreet692/events{/privacy}",
"received_events_url": "https://api.github.com/users/amanpreet692/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @amanpreet692 ,\r\nI'm not sure what this error means, but I've `T5` `onnx` version ready which is compatible with `generate` method.\r\nTo be able to use cache I exported the `encoder` and `lm_head` to `onnx` and kept the `decoder` in `torch`. This is bit hacky but still gives 1.4-1.6x speed-up for beam search, I'll be sharing it soon.",
"Yep, Even I was able to do that but since majority of the time is taken while decoding I wanted to convert decoder as well! Will keep trying for now.",
"@patil-suraj A question, somehow converting both lm-head and encoder is giving me worse result as compared to only converting the encoder. Did you try any additional optimizations like quantization?",
"No, I didn't try quantization with T5, so far I'm getting good enough speed-up and results are same as that of torch.\r\n\r\nNot related to your `onnx` question, but you could also distill the models, to get additional speed-ups with minimal perf drop. Sam has just relased amazing s2s distillation [paper](https://arxiv.org/pdf/2010.13002.pdf). See if that helps you with speeding-up inference.",
"Hey @amanpreet692!\r\nThanks a lot for looking at making the ONNX version compatible with beam search. Could you send over your full script to make it easier to debug? Happy to hop on a call this week and hear a bit more what you have in mind. The two decoders solution sounds interesting!",
"I've posted the script on the [forum ](https://discuss.huggingface.co/t/speeding-up-t5-inference/1841).",
"@abelriboulot Thanks a lot for getting back :)\r\nHere are the scripts for my work (The first two are changes on top of your code and the third is my custom model with 2 decoders):\r\n1) [huggingface_utilities.py](https://gist.github.com/amanpreet692/41dba767220b5b1a6417066197781328) : Additional changes to include past states as input and output and convert 3 components (2 decoders, 1 encoder) into onnx format.\r\n2) [models.py](https://gist.github.com/amanpreet692/d36af959e0d8d9cf84b19ff26d9b19d8) : Smallish change to include a new class CombinedDecoderNoPast \r\n3) [t5_onnx_model.py](https://gist.github.com/amanpreet692/a8bf2d45a8f368830f3838790461d26b) : Complete T5 model that works with beam search, major changes in decoder processing.\r\n\r\nJust an update: I was able to resolve to above issue but started getting a new shape issue for buffers, have raised an issue on onnx repo as well: [ONNX Issue](https://github.com/microsoft/onnxruntime/issues/5646)\r\n\r\nAny pointers for debugging would be great, and sure it would be awesome if we can get on a call and work on this!! \r\nWill keep trying on my own till then.\r\n\r\n@patil-suraj Good job! I looked at your code and I had tried something very similar. Although am still skeptical as I was getting worse performance with converting both encoder and lm-head rather than only encoder. Will look at your results again.\r\n\r\nThanks again @abelriboulot !",
"as long as we pass same arguments to `generate` then we should get same results, I didn't observe any loss in accuracy.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | CONTRIBUTOR | null | Hey guys,
I didn't know where this belonged, so I'm opening a generic issue.
I was working on integrating the ONNX T5 code by @abelriboulot with the HuggingFace Beam Search decoding code since I already had a decently performing T5 model for summarization and wanted to improve performance on CPU while maintaining the inference accuracy.
It works for the most part, but it is slower because the HF code uses cached past state values to speed up decoding. I got around this by creating two decoders with lm-head: one that doesn't take in past values, for the initial decoding step, and another for subsequent steps where past values are considered. This is a bit complicated, as the past values have to be flattened out to pass through the ONNX graph; I did that, and it works for getting the output back.
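For reference, the flattening/unflattening I mean is along these lines (a sketch of my own helpers, not a library API; T5-base has 12 layers with 4 past tensors each, hence the 48 flat tensors):
```python
# Hypothetical helpers for passing past states through the ONNX graph.
def flatten_past(past_key_value_states):
    # [(self_k, self_v, cross_k, cross_v), ...] -> flat list of 48 tensors
    return [t for layer_states in past_key_value_states for t in layer_states]

def unflatten_past(flat_states, n_layers=12):
    per_layer = len(flat_states) // n_layers  # 4 tensors per layer here
    return [
        tuple(flat_states[i * per_layer:(i + 1) * per_layer])
        for i in range(n_layers)
    ]
```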
But when passing the input parameters, I get the following error:
**RUNTIME_EXCEPTION : Non-zero status code returned while running Mul node. Name:'Mul_48' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/element_wise_ops.h:479 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 2 by 3**
I feel like I am close to a solution that could essentially be added to the repo, but this error is tripping me up :(
Any help whatsoever will be appreciated.
Thanks
@mfuntowicz @abelriboulot @patrickvonplaten @patil-suraj @sshleifer
ONNX Export code:
```python
past_state_input_pre = torch.rand((1,12,1,64))
past_state_input_post = torch.rand((1, 12, 10, 64))
past_key_value_states = [(past_state_input_pre, past_state_input_pre, past_state_input_post, past_state_input_post) for i in range(12)]
past_val_outputs = {'past_states_op_'+str(i): {0:'batch', 2: 'sequence'} for i in range(48)}
past_val_inputs = {'past_states_ip' + str(i): {0: 'batch', 2: 'sequence'} for i in range(48)}
dynamix_axes_dict = {
'input_ids': {0:'batch', 1: 'sequence'},
'encoder_hidden_states': {0:'batch', 1: 'sequence'}
}
dynamix_axes_dict.update(past_val_inputs)
dynamix_axes_dict.update({'hidden_states': {0:'batch', 1: 'sequence'}})
dynamix_axes_dict.update(past_val_outputs)
output_names_list = ['hidden_states'] + ['past_states_op_' + str(i) for i in range(48)]
input_names_list = ['input_ids', 'encoder_hidden_states'] + ['past_states_ip' + str(i) for i in range(48)]
# Exports to ONNX
_ = torch.onnx.export(
    decoder_with_lm_head,
    (torch.tensor([[42]]), simplified_encoder(input_ids), past_key_value_states),
    f"{output_prefix}-decoder-with-lm-head.onnx",
    export_params=True,
    opset_version=12,
    input_names=input_names_list,
    output_names=output_names_list,
    dynamic_axes=dynamix_axes_dict,
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8155/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8155/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8154/comments | https://api.github.com/repos/huggingface/transformers/issues/8154/events | https://github.com/huggingface/transformers/issues/8154 | 732,478,374 | MDU6SXNzdWU3MzI0NzgzNzQ= | 8,154 | [s2s] Trainer vs PTL timings | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"I might be misreading progress bars. Running another test, will reopen if I can replicate. ",
"PTL (device 1) using less GPU ram it seems:\r\n\r\n\r\n\r\n\r\nProgress bars (note that PTL/Bottom is per epoch):\r\n\r\n\r\n",
"I'm also experiencing slow down on TPU's, didn't run the new changes on GPU yet. I\"ll investigate this",
"Thx!",
"I've confirmed that builtin ~2x slower on 1 GPU than PTL. Same commands as above on a different machine. All the screenshots above are valid.",
"These seem to run at the same speed if you pass `--fp16_opt_level=O1` to pytorch-lightning. Verifying now and will post results in 5 hrs.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | CONTRIBUTOR | null | For the following two commands,
+ PTL finishes: 2.01 it/s, ~3H, 21.32 Rouge
+ Trainer: 1.0 it/s, roughly 5.5H, 21.36 Rouge
I wanted to report this so I don't lose track of it. I looked at the code and don't see any obvious issue, besides the slowdown being suspiciously close to 2x.
Any idea @patil-suraj ?
### PTL Command
```bash
export BS=32
export GAS=1
python finetune.py \
--learning_rate=3e-5 \
--fp16 \
--gpus 1 \
--do_train \
--do_predict \
--val_check_interval 0.25 \
--n_val 500 \
--num_train_epochs 2 \
--freeze_encoder --freeze_embeds --data_dir cnn_dm \
--max_target_length 142 --val_max_target_length=142 \
--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps=$GAS \
--model_name_or_path sshleifer/student_cnn_12_6 \
--tokenizer_name facebook/bart-large \
--warmup_steps 500 \
--output_dir distilbart-cnn-12-6
```
### Trainer command
same as `builtin_trainer/train_distilbart_cnn.sh`:
```bash
export BS=32
export GAS=1
export m=sshleifer/student_cnn_12_6
export tok=facebook/bart-large
export MAX_TGT_LEN=142
python finetune_trainer.py \
--model_name_or_path $m --tokenizer_name $tok \
--data_dir cnn_dm \
--output_dir distilbart-cnn-12-6-trainer --overwrite_output_dir \
--learning_rate=3e-5 --sortish-sampler \
--warmup_steps 500 \
--fp16 \
--n_val 500 \
--gradient_accumulation_steps=$GAS \
--per_device_train_batch_size=$BS --per_device_eval_batch_size=$BS \
--freeze_encoder --freeze_embeds \
--num_train_epochs=2 \
--save_steps 3000 --eval_steps 3000 \
--logging_first_step \
--max_target_length 142 --val_max_target_length $MAX_TGT_LEN --test_max_target_length $MAX_TGT_LEN \
--do_train --do_eval --do_predict --evaluate_during_training \
--predict_with_generate --sortish_sampler
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8154/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8153/comments | https://api.github.com/repos/huggingface/transformers/issues/8153/events | https://github.com/huggingface/transformers/pull/8153 | 732,407,932 | MDExOlB1bGxSZXF1ZXN0NTEyMzUyNzc5 | 8,153 | Add a template for examples and apply it for mlm and plm examples | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR adds a cookiecutter template for adding a new example, and experiments with it by adding the new run_mlm script and a run_plm script specific to XLNet. They run with the same results as the old versions.
Side note: the part for random masking applied in a data collator can become platform agnostic later on, if datasets adds a lazy map method. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8153/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8153",
"html_url": "https://github.com/huggingface/transformers/pull/8153",
"diff_url": "https://github.com/huggingface/transformers/pull/8153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8153.patch",
"merged_at": 1603993092000
} |
https://api.github.com/repos/huggingface/transformers/issues/8152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8152/comments | https://api.github.com/repos/huggingface/transformers/issues/8152/events | https://github.com/huggingface/transformers/pull/8152 | 732,351,712 | MDExOlB1bGxSZXF1ZXN0NTEyMzA2NTM1 | 8,152 | Document tokenizer_class in configurations | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"❤️ "
] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
Some random guy made a PR adding a `tokenizer_class` argument to `PretrainedConfig` but did not document it. This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8152",
"html_url": "https://github.com/huggingface/transformers/pull/8152",
"diff_url": "https://github.com/huggingface/transformers/pull/8152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8152.patch",
"merged_at": 1603982626000
} |
https://api.github.com/repos/huggingface/transformers/issues/8151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8151/comments | https://api.github.com/repos/huggingface/transformers/issues/8151/events | https://github.com/huggingface/transformers/pull/8151 | 732,346,593 | MDExOlB1bGxSZXF1ZXN0NTEyMzAyMjgz | 8,151 | Smarter prediction loop and no- -> no_ in console args | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR does two things:
- the first one is to replace `no-` with `no_` in the `HFArgumentParser` so that arguments get a more consistent name: for instance, `use_tokenizer_fast` in the new examples script gives an argument `no-use_tokenizer_fast`, and the inconsistency between - and _ makes it hard to find.
- the second one is to avoid computing the predictions and labels (and storing them) in the evaluation of a `Trainer` when there is no `compute_metrics` function. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8151/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8151",
"html_url": "https://github.com/huggingface/transformers/pull/8151",
"diff_url": "https://github.com/huggingface/transformers/pull/8151.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8151.patch",
"merged_at": 1603983385000
} |
https://api.github.com/repos/huggingface/transformers/issues/8150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8150/comments | https://api.github.com/repos/huggingface/transformers/issues/8150/events | https://github.com/huggingface/transformers/pull/8150 | 732,321,775 | MDExOlB1bGxSZXF1ZXN0NTEyMjgxMzE4 | 8,150 | [s2s] distillBART docs for paper replication | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8150/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8150",
"html_url": "https://github.com/huggingface/transformers/pull/8150",
"diff_url": "https://github.com/huggingface/transformers/pull/8150.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8150.patch",
"merged_at": 1603987275000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8149/comments | https://api.github.com/repos/huggingface/transformers/issues/8149/events | https://github.com/huggingface/transformers/pull/8149 | 732,284,137 | MDExOlB1bGxSZXF1ZXN0NTEyMjUwMDEx | 8,149 | Model card: Update widget examples. | {
"login": "Ethan-yt",
"id": 9592150,
"node_id": "MDQ6VXNlcjk1OTIxNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9592150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ethan-yt",
"html_url": "https://github.com/Ethan-yt",
"followers_url": "https://api.github.com/users/Ethan-yt/followers",
"following_url": "https://api.github.com/users/Ethan-yt/following{/other_user}",
"gists_url": "https://api.github.com/users/Ethan-yt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ethan-yt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ethan-yt/subscriptions",
"organizations_url": "https://api.github.com/users/Ethan-yt/orgs",
"repos_url": "https://api.github.com/users/Ethan-yt/repos",
"events_url": "https://api.github.com/users/Ethan-yt/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ethan-yt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | The previous example in the widget had an error; this corrects it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8149/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8149",
"html_url": "https://github.com/huggingface/transformers/pull/8149",
"diff_url": "https://github.com/huggingface/transformers/pull/8149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8149.patch",
"merged_at": 1603975757000
} |
https://api.github.com/repos/huggingface/transformers/issues/8148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8148/comments | https://api.github.com/repos/huggingface/transformers/issues/8148/events | https://github.com/huggingface/transformers/issues/8148 | 732,256,292 | MDU6SXNzdWU3MzIyNTYyOTI= | 8,148 | Masking in Pooling Layer from BERT Output? | {
"login": "datistiquo",
"id": 47474379,
"node_id": "MDQ6VXNlcjQ3NDc0Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/47474379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datistiquo",
"html_url": "https://github.com/datistiquo",
"followers_url": "https://api.github.com/users/datistiquo/followers",
"following_url": "https://api.github.com/users/datistiquo/following{/other_user}",
"gists_url": "https://api.github.com/users/datistiquo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datistiquo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datistiquo/subscriptions",
"organizations_url": "https://api.github.com/users/datistiquo/orgs",
"repos_url": "https://api.github.com/users/datistiquo/repos",
"events_url": "https://api.github.com/users/datistiquo/events{/privacy}",
"received_events_url": "https://api.github.com/users/datistiquo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | In Keras, when an embedding layer uses masking, the mask is propagated to subsequent layers such as pooling or RNN layers. I wonder if this also holds for the transformers BERT models. I.e., in the following, is the attention mask also used as a mask in the pooling layer, so that the averages do not include padding tokens?
```
id_ = Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
mask_ = Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
atn_ = Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
bert_model = TFBertModel.from_pretrained('bert-base-uncased')
embedding = bert_model(id_, attention_mask=mask_, token_type_ids=atn_)[0]
x = GlobalAveragePooling1D()(embedding) # is the attention_mask used as a mask here?
x = Dropout(0.2)(x)
out = Dense(3, activation='softmax')(x)
model = Model(inputs=[id_, mask_, atn_], outputs=out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt)
```
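As far as I can tell, the Keras mask does not automatically propagate out of `TFBertModel`, so one explicit workaround is to compute the masked average by hand from the attention mask (a sketch of a possible workaround reusing `mask_` and `embedding` from above, not an official recipe):
```python
import tensorflow as tf

# Masked mean pooling: padded positions provably contribute nothing.
mask_f = tf.cast(mask_, tf.float32)                    # (batch, seq_len)
masked = embedding * tf.expand_dims(mask_f, axis=-1)   # zero out padded tokens
x = tf.reduce_sum(masked, axis=1) / tf.maximum(
    tf.reduce_sum(mask_f, axis=1, keepdims=True), 1.0  # divide by real-token count
)
```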
https://discuss.huggingface.co/t/bert-output-for-padding-tokens/1550 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8148/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8147/comments | https://api.github.com/repos/huggingface/transformers/issues/8147/events | https://github.com/huggingface/transformers/pull/8147 | 732,243,610 | MDExOlB1bGxSZXF1ZXN0NTEyMjE1ODQz | 8,147 | [Model cards] Seq2Seq tags | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"👍 "
] | 1,603 | 1,603 | 1,603 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8147/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8147",
"html_url": "https://github.com/huggingface/transformers/pull/8147",
"diff_url": "https://github.com/huggingface/transformers/pull/8147.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8147.patch",
"merged_at": 1603971955000
} |
https://api.github.com/repos/huggingface/transformers/issues/8146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8146/comments | https://api.github.com/repos/huggingface/transformers/issues/8146/events | https://github.com/huggingface/transformers/issues/8146 | 732,204,689 | MDU6SXNzdWU3MzIyMDQ2ODk= | 8,146 | Make tokenizer.pad() also pad `labels` | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | # 🚀 Feature request
Make tokenizer.pad() also pad `labels`
## Motivation
I tried to use this:
https://github.com/huggingface/transformers/blob/8065fea87007fbf7542fc060ff8ddd0b5df567da/src/transformers/data/data_collator.py#L69
But since `labels` is not padded, the result cannot be converted into a tensor: `ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.`
It currently pads `input_ids, attention_mask, token_type_ids, special_tokens_mask`
It seems logical to me that `tokenizer.pad()` should also pad `'labels'`.
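For context, here is a sketch of the manual workaround a data collator currently needs (untested; it assumes `labels` should be padded with `-100`, the ignore index of PyTorch's cross-entropy loss):
```python
import torch

def pad_labels(features, label_pad_token_id=-100):
    # tokenizer.pad() leaves the 'labels' key untouched, so pad every
    # example's labels to the longest label sequence in the batch here.
    max_len = max(len(f["labels"]) for f in features)
    for f in features:
        f["labels"] = f["labels"] + [label_pad_token_id] * (max_len - len(f["labels"]))
    return torch.tensor([f["labels"] for f in features])
```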
## Your contribution
I have already created PR #8116, which solves the problem above. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8146/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8146/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8145/comments | https://api.github.com/repos/huggingface/transformers/issues/8145/events | https://github.com/huggingface/transformers/issues/8145 | 732,151,447 | MDU6SXNzdWU3MzIxNTE0NDc= | 8,145 | TransformerXL: StopIteration: Caught StopIteration in replica 0 on device 0 | {
"login": "davidliujiafeng",
"id": 20847058,
"node_id": "MDQ6VXNlcjIwODQ3MDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/20847058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidliujiafeng",
"html_url": "https://github.com/davidliujiafeng",
"followers_url": "https://api.github.com/users/davidliujiafeng/followers",
"following_url": "https://api.github.com/users/davidliujiafeng/following{/other_user}",
"gists_url": "https://api.github.com/users/davidliujiafeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidliujiafeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidliujiafeng/subscriptions",
"organizations_url": "https://api.github.com/users/davidliujiafeng/orgs",
"repos_url": "https://api.github.com/users/davidliujiafeng/repos",
"events_url": "https://api.github.com/users/davidliujiafeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidliujiafeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The same code I tested on GPT-2, works fine for me. Guess something wrong with transformer-xl\r\n\r\nGPT-2 code below:\r\n```Python\r\nimport torch\r\nfrom torch.nn import DataParallel\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\ndevice = \"cuda:0\"\r\n\r\n# Get model\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\nmodel = DataParallel(model, device_ids=list(range(torch.cuda.device_count())))\r\nmodel.to(device=device)\r\n\r\n# Run forward\r\ninputs = tokenizer([\"This is an example\"], return_tensors=\"pt\")\r\noutputs = model(\r\n input_ids=inputs[\"input_ids\"].to(device),\r\n attention_mask=inputs[\"attention_mask\"].to(device),\r\n labels=inputs[\"input_ids\"].to(device),\r\n)\r\n\r\nprint(f\"outputs: {outputs}\")\r\nprint(\"Success.\")\r\n```",
"Seems like this was overlooked in #4300 ! I'll update TransfoXL in the same way.",
"As of now, Pytorch [doesn't support calling](https://github.com/pytorch/pytorch/issues/40457) `self.parameters()` within `DataParallel`, which causes the current issue. Even after fixing that, which was straightforward, Pytorch [also doesn't support calling](https://github.com/pytorch/pytorch/issues/36035) `self.ParameterList` and `self.ParameterDict`, which are also used in TransfoXL, which will cause another issue. As Pytorch is moving people away from `DataParallel`, they are unlikely to fix this anytime soon on their end. On our end, this is going to be much harder to fix in a non-BC way, as changing the way the model is organized means previous checkpoints cannot be loaded. In the meantime, you could use `DistributedDataParallel` instead. ",
"I used `torch.nn.parallel.DistributedDataParallel` to run the model in forward pass with the script below:\r\n```python\r\nimport os\r\nimport sys\r\nimport tempfile\r\nimport torch\r\nimport torch.distributed as dist\r\nimport torch.nn as nn\r\nimport torch.optim as optim\r\nimport torch.multiprocessing as mp\r\nfrom torch.nn.parallel import DistributedDataParallel as DDP\r\nfrom transformers import TransfoXLTokenizer, TransfoXLLMHeadModel\r\n\r\ndef setup(rank, world_size):\r\n os.environ['MASTER_ADDR'] = 'localhost'\r\n os.environ['MASTER_PORT'] = '12355'\r\n\r\n # initialize the process group\r\n dist.init_process_group(\"gloo\", rank=rank, world_size=world_size)\r\n\r\ndef demo_model_parallel(rank, world_size):\r\n print(f\"Running DDP with model parallel example on rank {rank}.\")\r\n setup(rank, world_size)\r\n\r\n # transfoXL model\r\n tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')\r\n mp_model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103', return_dict=True)\r\n ddp_mp_model = DDP(mp_model, find_unused_parameters=True)\r\n loss_fn = nn.MSELoss()\r\n optimizer = optim.SGD(ddp_mp_model.parameters(), lr=0.001)\r\n \r\n for i in range(10):\r\n optimizer.zero_grad()\r\n # check to see if the model returns different losses\r\n if rank == 0:\r\n inputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\n else:\r\n inputs = tokenizer(\"Borat and the republic of Kazakhistan!\", return_tensors = \"pt\")\r\n outputs = ddp_mp_model(input_ids = inputs[\"input_ids\"], labels=inputs[\"input_ids\"], return_dict = True)\r\n _l = outputs.losses.mean() # documentation is incorrect there is no `loss` but `losses`\r\n print(_l)\r\n _l.backward()\r\n optimizer.step()\r\n\r\ndef run_demo(demo_fn, world_size):\r\n mp.spawn(demo_fn,\r\n args=(world_size,),\r\n nprocs=world_size,\r\n join=True)\r\n \r\n \r\nif __name__ == \"__main__\":\r\n run_demo(demo_model_parallel, 2)\r\n```\r\n\r\nHowever during backward pass I get this error:\r\n```\r\nRuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that\r\nyour module has parameters that were not used in producing loss. You can enable unused parameter detection by (1)\r\npassing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making\r\nsure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the\r\ndistributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward`\r\nfunction. Please include the loss function and the structure of the return value of `forward` of your module when reporting\r\nthis issue (e.g. list, dict, iterable).\r\n```\r\n\r\nThe code is modified from [tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)",
"@TevenLeScao \r\nI have the same error when trainning TransformerXL.\r\n\r\nFile \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py\", line 60, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py\", line 1056, in forward\r\n return_dict=return_dict,\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py\", line 866, in forward\r\n mems = self.init_mems(bsz)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py\", line 800, in init_mems\r\n param = next(self.parameters())\r\nStopIteration\r\n\r\nhow to solve it quikly?",
"@yashbonde this seems to be an unrelated issue! I'll take a look tomorrow.\r\n@ismymajia see my message above - this is a Pytorch issue that we cannot fix without breaking backwards compatibility of checkpoints, as they're slowly stopping support for `DataParallel`.",
"Now the problem is that huggingface tansformer-xl model cannot be trained. huggingface tansformer-xl model will not be supported? Do you plan to update the tansformer-xl code? \r\n@TevenLeScao",
"As I said in my previous post, you can just use single-GPU or distributed training instead. Of course transformer-xl is supported ; but we cannot update its code to bypass the Pytorch issues with `DataParallel` without breaking backwards compatibility with previous checkpoints.",
"when i train tansformer-xl as below:\r\n\r\npython -m torch.distributed.launch --nproc_per_node 4 run_language_modeling.py --output_dir ${model_dir} \\\r\n\t --tokenizer_name $data_dir/wordpiece-custom.json \\\r\n\t --config_name $data_dir/$config_file \\\r\n --train_data_files \"$data_dir/train*.txt\" \\\r\n --eval_data_file $data_dir/valid.txt \\\r\n --block_size=128 \\\r\n --do_train \\\r\n\t --do_eval \\\r\n --per_device_train_batch_size 4 \\\r\n --gradient_accumulation_steps 1 \\\r\n --learning_rate 6e-4 \\\r\n --weight_decay 0.01 \\\r\n --adam_epsilon 1e-6 \\\r\n --adam_beta1 0.9 \\\r\n --adam_beta2 0.98 \\\r\n --max_steps 500_000 \\\r\n --warmup_steps 24_000 \\\r\n --fp16 \\\r\n --logging_dir ${model_dir}/tensorboard \\\r\n --save_steps 5000 \\\r\n --save_total_limit 20 \\\r\n --seed 108 \\\r\n --max_steps -1 \\\r\n --num_train_epochs 20 \\\r\n\t --dataloader_num_workers 0 \\\r\n --overwrite_output_dir \r\n\r\noccur error:\r\n\r\n[INFO|language_modeling.py:324] 2020-11-11 13:50:49,520 >> Loading features from cached file /opt/ml/input/data/training/mm/huggingface/data/train40G/cached_lm_PreTrainedTokenizerFast_126_train3.txt [took 93.739 s]\r\n[INFO|language_modeling.py:324] 2020-11-11 13:52:30,959 >> Loading features from cached file /opt/ml/input/data/training/mm/huggingface/data/train40G/cached_lm_PreTrainedTokenizerFast_126_train2.txt [took 101.436 s]\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 350, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 313, in main\r\n trainer.train(model_path=model_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 657, in train\r\n else True\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py\", line 333, in __init__\r\n self.broadcast_bucket_size)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py\", line 549, in _distributed_broadcast_coalesced\r\n dist._broadcast_coalesced(self.process_group, tensors, buffer_size)\r\nRuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:518, unhandled cuda error, NCCL version 2.4.8\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py\", line 261, in <module>\r\n main()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py\", line 257, in main\r\n cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/usr/bin/python', '-u', 'run_language_modeling.py', '--local_rank=3', '--output_dir', '/opt/ml/input/data/training/mm/huggingface/data/20201107/checkpoints/transfo-xl_1L_dembed1024_dhead64_dInner4096_dmodel1024_heads16_1', '--tokenizer_name', '/opt/ml/input/data/training/mm/huggingface/data/20201107/wordpiece-custom.json', '--config_name', '/opt/ml/input/data/training/mm/huggingface/data/20201107/config-transfo-xl.json', '--train_data_files', '/opt/ml/input/data/training/mm/huggingface/data/20201107/train*.txt', '--eval_data_file', '/opt/ml/input/data/training/mm/huggingface/data/20201107/valid.txt', '--block_size=128', '--do_train', '--do_eval', '--per_device_train_batch_size', '16', '--gradient_accumulation_steps', '1', '--learning_rate', '6e-4', '--weight_decay', '0.01', '--adam_epsilon', '1e-6', '--adam_beta1', '0.9', '--adam_beta2', '0.98', '--max_steps', 
'500_000', '--warmup_steps', '24_000', '--fp16', '--logging_dir', '/opt/ml/input/data/training/mm/huggingface/data/20201107/checkpoints/transfo-xl_1L_dembed1024_dhead64_dInner4096_dmodel1024_heads16_1/tensorboard', '--save_steps', '5000', '--save_total_limit', '20', '--seed', '108', '--max_steps', '-1', '--num_train_epochs', '20', '--overwrite_output_dir']' died with <Signals.SIGKILL: 9>.\r\n*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \r\n*****************************************\r\n\r\nmy env is below:\r\npytorch:1.6+cu101\r\ntransformer 3.4\r\ntokenizer 0.9.3\r\n\r\n@TevenLeScao How to solve it ? ",
"Hey, looking at the error message (the SIGKILL) this looks more like the machine killing the process than like a bug. What's your setup? This happens if the machine runs out of RAM for example. ",
"I am trainning the transformer-xl on one machine with multi-gpus by ddp.\r\nI don't know if this is a problem.\r\n@TevenLeScao \r\n ",
"Hey, usually when you get a mysterious CUDA error like this (\"RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:518, unhandled cuda error, NCCL version 2.4.8\") it's because of GPU memory. I'll close the issue now as this is unrelated, and does not particularly look like a library bug. You should probably post on the forums at https://discuss.huggingface.co/ to see if you can get help with debugging!",
"[INFO|language_modeling.py:242] 2020-11-11 11:54:46,363 >> Loading features from cached file /opt/ml/input/data/training/kyzhan/huggingface/data/train40G/cached_lm_PreTrainedTokenizerFast_126_train3.txt [took 116.431 s]\r\n/ _th_index_copy_\r\n main()\r\n File \"run_hf_train_lm_ti.py\", line 338, in main\r\n trainer.train(model_path=model_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 758, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1056, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1082, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py\", line 511, in forward\r\n output = self.module(*inputs[0], **kwargs[0])\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py\", line 1056, in forward\r\n return_dict=return_dict,\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py\", line 888, in forward\r\n word_emb = self.word_emb(input_ids)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py\", line 448, in forward\r\n emb_flat.index_copy_(0, indices_i, emb_i)\r\nRuntimeError: Expected object of scalar type Float but got scalar type Half for argument #4 'source' in call to _th_index_copy_\r\n\r\nNow encounter this problem. @TevenLeScao",
"I'm closing this issue as it concerns an unrelated problem that we cannot solve. Can you open a new one with a complete description ?"
] | 1,603 | 1,605 | 1,605 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-debian-stretch-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
@TevenLeScao
## Error I get
```
Traceback (most recent call last):
File "/ai/fzc/minGPT/transformerXLtest.py", line 163, in <module>
input_ids=inputs["input_ids"].to(device),
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/opt/conda/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py", line 866, in forward
mems = self.init_mems(bsz)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py", line 800, in init_mems
param = next(self.parameters())
StopIteration
```
## To reproduce the problem
Run the code below:
```python
import torch
from torch.nn import DataParallel
from transformers import TransfoXLTokenizer, TransfoXLModel
device = "cuda:0"
# Get model
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLModel.from_pretrained('transfo-xl-wt103', return_dict=True)
model = DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
model.to(device=device)
# Run forward
inputs = tokenizer(["This is an example"], return_tensors="pt")
outputs = model(
input_ids=inputs["input_ids"].to(device),
)
print(f"outputs: {outputs}")
print("Success.")
```
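For reference, a minimal sketch of the single-device fallback (the `StopIteration` is raised by `next(self.parameters())` inside the `DataParallel` replicas, so running the unwrapped model on one device avoids it):
```python
import torch
from transformers import TransfoXLTokenizer, TransfoXLModel

device = "cuda:0" if torch.cuda.is_available() else "cpu"

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103", return_dict=True).to(device)

# No DataParallel wrapper, so the replica-side parameter lookup never happens.
inputs = tokenizer(["This is an example"], return_tensors="pt")
outputs = model(input_ids=inputs["input_ids"].to(device))
```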
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8145/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8144/comments | https://api.github.com/repos/huggingface/transformers/issues/8144/events | https://github.com/huggingface/transformers/issues/8144 | 732,107,799 | MDU6SXNzdWU3MzIxMDc3OTk= | 8,144 | ETA on TFEncoderDecoderModel and is BERTShare from https://arxiv.org/pdf/1907.12461.pdf planned? | {
"login": "anicolson",
"id": 26111230,
"node_id": "MDQ6VXNlcjI2MTExMjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26111230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anicolson",
"html_url": "https://github.com/anicolson",
"followers_url": "https://api.github.com/users/anicolson/followers",
"following_url": "https://api.github.com/users/anicolson/following{/other_user}",
"gists_url": "https://api.github.com/users/anicolson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anicolson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anicolson/subscriptions",
"organizations_url": "https://api.github.com/users/anicolson/orgs",
"repos_url": "https://api.github.com/users/anicolson/repos",
"events_url": "https://api.github.com/users/anicolson/events{/privacy}",
"received_events_url": "https://api.github.com/users/anicolson/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I think we can keep this open, this looks like a fun project. Pinging @patrickvonplaten to let him know!",
"The models of https://arxiv.org/pdf/1907.12461.pdf are already added. You can check them out here (they are not called shared, but are shared indeed): https://huggingface.co/models?search=google%2Froberta2roberta\r\n\r\nAlso, I'll be releasing an in-detail notebook about these models on Monday, so stay tuned :-) \r\n\r\nNo ETA on TFEncoderDecoder models, but it's definitely on the roadmap :-) ",
"> The models of https://arxiv.org/pdf/1907.12461.pdf are already added. You can check them out here (they are not called shared, but are shared indeed): https://huggingface.co/models?search=google%2Froberta2roberta\r\n> \r\n> Also, I'll be releasing an in-detail notebook about these models on Monday, so stay tuned :-)\r\n> \r\n> No ETA on TFEncoderDecoder models, but it's definitely on the roadmap :-)\r\n\r\nThanks, I am switching from TF to PyTorch :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | # 🚀 Feature request
Is there a plan for BERTShare from https://arxiv.org/pdf/1907.12461.pdf to be an option for the EncoderDecoderModel?
Also, I can see that a TFEncoderDecoderModel is on the 'To Do' list for the [EncoderDecoder Framework](https://github.com/huggingface/transformers/projects/23). An expected time of completion for this would be greatly appreciated.
## Motivation
Having an easy-to-use seq2seq model integrated into Hugging Face (with TensorFlow) would help my research immensely. Also, models like BERTShare are much more parameter-efficient.
## Your contribution
I am happy to help in any form; I am just not sure where help is needed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8144/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/8144/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8143/comments | https://api.github.com/repos/huggingface/transformers/issues/8143/events | https://github.com/huggingface/transformers/issues/8143 | 732,083,397 | MDU6SXNzdWU3MzIwODMzOTc= | 8,143 | Trainer makes RAM go out of memory after a while | {
"login": "Maxinho96",
"id": 26610682,
"node_id": "MDQ6VXNlcjI2NjEwNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/26610682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maxinho96",
"html_url": "https://github.com/Maxinho96",
"followers_url": "https://api.github.com/users/Maxinho96/followers",
"following_url": "https://api.github.com/users/Maxinho96/following{/other_user}",
"gists_url": "https://api.github.com/users/Maxinho96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Maxinho96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maxinho96/subscriptions",
"organizations_url": "https://api.github.com/users/Maxinho96/orgs",
"repos_url": "https://api.github.com/users/Maxinho96/repos",
"events_url": "https://api.github.com/users/Maxinho96/events{/privacy}",
"received_events_url": "https://api.github.com/users/Maxinho96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Additional info:\r\nas a workaround, I am now using a smaller validation set, but it is not ideal. If the memory issue can't be solved, a better solution could be to introduce an option to use a random subset of the validation set to use to evaluate during training.",
"If the problem is just that the RAM is not freed after evaluation, we can try to work around that (though Python garbage collector can be tricky to trigger).\r\n\r\nIf the validation set gives predictions that do not fit in RAM, we can't do much in the generic Trainer directly. You can subclass `Trainer` and the `evaluate` function to use the `datasets` library `Metric` objects, which store the predictions with arrows so use less RAM.",
"> If the problem is just that the RAM is not freed after evaluation, we can try to work around that (though Python garbage collector can be tricky to trigger).\r\n\r\nI think the problem is not this one. The RAM is freed after evaluation (after some seconds), but it is not freed between an evaluation single step and the other. Correct me if I am wrong, but after a step the only thing to keep in RAM should be the loss, so it can be averaged at the end of evaluation, so the RAM usage should not increase as the steps go ahead, which instead is what happens.",
"During evaluation, we need to store predictions and labels too, for the metric computation. If you want to store the loss only, then pass along the flag `prediction_loss_only=True` to your training arguments, which will use less more RAM (and you can then probably remove the `eval_accumulation_steps=1` to speed up evaluation).",
"I didn't know that, it solved my problem thank you!",
"Should even be automatic now as I just merged a PR on master where the Trainer does not bother saving the predictions when there is no `compute_metrics` (which is your case here)."
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.14.193-113.317.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger @patrickvonplaten
## Information
Model I am using: T5
The problem arises when using my own modified scripts:
I load my dataset this way:
```python
def tokenize(batch):
    tokenized_input = tokenizer(batch[text_column], padding=True, truncation=True, max_length=153)
    tokenized_label = tokenizer(batch[generated_column], padding=True, truncation=True, max_length=274)
    tokenized_input['labels'] = tokenized_label['input_ids']
    return tokenized_input

dataset = load_dataset('csv', data_files=dataset_file, split='train')
dataset = dataset.train_test_split(test_size=0.05, seed=SEED)
train_dataset = dataset['train']
val_dataset = dataset['test']
train_dataset = train_dataset.map(tokenize, batched=True, batch_size=len(train_dataset))
val_dataset = val_dataset.map(tokenize, batched=True, batch_size=len(val_dataset))
train_dataset.set_format('numpy', columns=['input_ids', 'attention_mask', 'labels'])
val_dataset.set_format('numpy', columns=['input_ids', 'attention_mask', 'labels'])
```
And then I use Trainer to train my T5 model like this:
```python
training_args = TrainingArguments(
    output_dir=output_dir,
    num_train_epochs=1,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    eval_accumulation_steps=1,
    learning_rate=0.001,
    evaluation_strategy='steps',
    save_steps=1000000,
    save_total_limit=1,
    remove_unused_columns=True,
    run_name=now,
    logging_steps=100,
    eval_steps=100,
    logging_first_step=True
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset
)

trainer.train()
```
The task I am working on is my own dataset:
I am using a custom machine-translation dataset that is 12 MB in size and has 18,000 examples. The maximum sequence lengths are 153 tokens for the input and 274 for the output. I have also added 68 special tokens, as the dataset contains many symbols.
## To reproduce
Steps to reproduce the behavior:
1. Load a dataset like I did.
2. Start training using Trainer
3. During every evaluation, RAM usage grows and is not freed, so each subsequent evaluation step accumulates more RAM, until the maximum is reached and training stops with this error: `RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 281882432 bytes. Error code 12 (Cannot allocate memory)` (the machine I am using has 60 GB of RAM).
## Expected behavior
RAM used during evaluation should be freed after every step. It looks like something accumulates during evaluation and the memory is not freed. I get the same behavior if I skip training and run only evaluation: after many evaluation steps the RAM blows up.
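A sketch of one mitigation (assuming only the evaluation loss is needed, i.e. there is no `compute_metrics`): setting `prediction_loss_only=True` should stop the `Trainer` from accumulating predictions and labels in RAM.
```python
training_args = TrainingArguments(
    output_dir=output_dir,
    per_device_eval_batch_size=8,
    evaluation_strategy='steps',
    eval_steps=100,
    prediction_loss_only=True,  # keep only the loss during evaluation
)
```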
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8143/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8142/comments | https://api.github.com/repos/huggingface/transformers/issues/8142/events | https://github.com/huggingface/transformers/issues/8142 | 732,060,842 | MDU6SXNzdWU3MzIwNjA4NDI= | 8,142 | Is there any Jupyter notebook or detailed example using BertGeneration or EncoderDecoderModel classes? | {
"login": "kseh92",
"id": 40313671,
"node_id": "MDQ6VXNlcjQwMzEzNjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/40313671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kseh92",
"html_url": "https://github.com/kseh92",
"followers_url": "https://api.github.com/users/kseh92/followers",
"following_url": "https://api.github.com/users/kseh92/following{/other_user}",
"gists_url": "https://api.github.com/users/kseh92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kseh92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kseh92/subscriptions",
"organizations_url": "https://api.github.com/users/kseh92/orgs",
"repos_url": "https://api.github.com/users/kseh92/repos",
"events_url": "https://api.github.com/users/kseh92/events{/privacy}",
"received_events_url": "https://api.github.com/users/kseh92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Releasing in ~1 week - it's almost ready :-) ",
"Thanks for letting me know! :)",
"I've released two condensed notebooks as mentioned here: https://discuss.huggingface.co/t/leveraging-pre-trained-checkpoints-for-summarization/835/13?u=patrickvonplaten\r\n\r\nWill also release a longer educational blog post in a bit.",
"https://huggingface.co/blog/warm-starting-encoder-decoder"
] | 1,603 | 1,604 | 1,604 | NONE | null | I have been looking to do some seq2seq tasks in the huggingface-transformers using BertGeneration or EncoderDecoderModel classes.
But I have only ended up finding the simple examples described in the API documentation, like the one below.
```
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints
>>> # forward
>>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
>>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
>>> # training
>>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids, return_dict=True)
>>> loss, logits = outputs.loss, outputs.logits
>>> # save and load from pretrained
>>> model.save_pretrained("bert2bert")
>>> model = EncoderDecoderModel.from_pretrained("bert2bert")
>>> # generation
>>> generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
```
Is there any Jupyter notebook or detailed example specifically using the BertGeneration or EncoderDecoderModel classes? I know these classes were released quite recently...
It would be a great help for me if I could find one. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8142/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8141/comments | https://api.github.com/repos/huggingface/transformers/issues/8141/events | https://github.com/huggingface/transformers/issues/8141 | 732,034,100 | MDU6SXNzdWU3MzIwMzQxMDA= | 8,141 | Vocab files missing in community pre-trained t5 model | {
"login": "penatbater",
"id": 37921244,
"node_id": "MDQ6VXNlcjM3OTIxMjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/37921244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penatbater",
"html_url": "https://github.com/penatbater",
"followers_url": "https://api.github.com/users/penatbater/followers",
"following_url": "https://api.github.com/users/penatbater/following{/other_user}",
"gists_url": "https://api.github.com/users/penatbater/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penatbater/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penatbater/subscriptions",
"organizations_url": "https://api.github.com/users/penatbater/orgs",
"repos_url": "https://api.github.com/users/penatbater/repos",
"events_url": "https://api.github.com/users/penatbater/events{/privacy}",
"received_events_url": "https://api.github.com/users/penatbater/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm guessing you can use the tokenizer from t5-base (https://huggingface.co/t5-base#list-files) but @sshleifer can confirm or infirm",
"> I'm guessing you can use the tokenizer from t5-base (https://huggingface.co/t5-base#list-files) but @sshleifer can confirm or infirm\r\n\r\nThis is what I used in the interim. I'm just not sure if there are some implications with using a different tokenizer with the fine-tuned model.",
"Correct all t5 tokenizers are identical. There will be no issue."
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-1028-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
Summarization: @sshleifer
## Information
I am trying to use sshleifer/t5-base-cnn for a summarization task, but there seems to be an issue with the tokenizer. I looked at the files listed at https://huggingface.co/sshleifer/t5-base-cnn# and there doesn't seem to be a vocab file there.
```
tokenizer = AutoTokenizer.from_pretrained("sshleifer/t5-base-cnn")
model = AutoModelWithLMHead.from_pretrained("sshleifer/t5-base-cnn")

OSError: Model name 'sshleifer/t5-base-cnn' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'sshleifer/t5-base-cnn' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
```
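For reference, a sketch of the interim workaround confirmed in the comments (all T5 checkpoints share the same SentencePiece vocabulary, so the `t5-base` tokenizer can be paired with this fine-tuned model):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

# Tokenizer from the base checkpoint, weights from the fine-tuned checkpoint.
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelWithLMHead.from_pretrained("sshleifer/t5-base-cnn")
```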
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8141/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8140/comments | https://api.github.com/repos/huggingface/transformers/issues/8140/events | https://github.com/huggingface/transformers/issues/8140 | 731,955,035 | MDU6SXNzdWU3MzE5NTUwMzU= | 8,140 | Customize tokenizer in model card's widget | {
"login": "Ethan-yt",
"id": 9592150,
"node_id": "MDQ6VXNlcjk1OTIxNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9592150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ethan-yt",
"html_url": "https://github.com/Ethan-yt",
"followers_url": "https://api.github.com/users/Ethan-yt/followers",
"following_url": "https://api.github.com/users/Ethan-yt/following{/other_user}",
"gists_url": "https://api.github.com/users/Ethan-yt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ethan-yt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ethan-yt/subscriptions",
"organizations_url": "https://api.github.com/users/Ethan-yt/orgs",
"repos_url": "https://api.github.com/users/Ethan-yt/repos",
"events_url": "https://api.github.com/users/Ethan-yt/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ethan-yt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I tried to use `BertModel` instead of `RobertaModel` (copy weights from Roberta to Bert). But the position embedding is different. And the outputs are different... So I have to use this combination of `RobertaModel` and `BertTokenizer`. Is that mean I can't use the inference widget?",
"Yes, this is possible. See https://github.com/huggingface/transformers/commit/ed71c21d6afcbfa2d8e5bb03acbb88ae0e0ea56a, you should add a `tokenizer_class` attribute to your config.json with the tokenizer class you want to use.\r\n\r\ncc @sgugger @LysandreJik I have no idea if this is currently documented or just in the code 🤭",
"> Yes, this is possible. See [ed71c21](https://github.com/huggingface/transformers/commit/ed71c21d6afcbfa2d8e5bb03acbb88ae0e0ea56a), you should add a `tokenizer_class` attribute to your config.json with the tokenizer class you want to use.\r\n> \r\n> cc @sgugger @LysandreJik I have no idea if this is currently documented or just in the code 🤭\r\n\r\nThank you! It works. I think you are right and I did not find this configuration in the documentation: https://huggingface.co/transformers/main_classes/configuration.html",
"Looks like that guy who made the PR did not document the new argument he added :-p ",
"arg, who does that guy think he is? 😂"
] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | I trained a Chinese RoBERTa model. In the model card, the widget uses the tokenizer defined in config.json (`RobertaTokenizer`), but my model uses `BertTokenizer`. Can I customize the tokenizer in the model card's widget, just as I can choose any combination of model and tokenizer in a pipeline? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8140/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8139/comments | https://api.github.com/repos/huggingface/transformers/issues/8139/events | https://github.com/huggingface/transformers/pull/8139 | 731,936,337 | MDExOlB1bGxSZXF1ZXN0NTExOTYxNDg5 | 8,139 | Fix doc errors and typos across the board | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,604 | 1,603 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8139/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8139",
"html_url": "https://github.com/huggingface/transformers/pull/8139",
"diff_url": "https://github.com/huggingface/transformers/pull/8139.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8139.patch",
"merged_at": 1603982013000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8138/comments | https://api.github.com/repos/huggingface/transformers/issues/8138/events | https://github.com/huggingface/transformers/issues/8138 | 731,927,505 | MDU6SXNzdWU3MzE5Mjc1MDU= | 8,138 | How to get translation of one batch of sentences after batch_encode_plus? | {
"login": "OOF-dura",
"id": 22954901,
"node_id": "MDQ6VXNlcjIyOTU0OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/22954901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OOF-dura",
"html_url": "https://github.com/OOF-dura",
"followers_url": "https://api.github.com/users/OOF-dura/followers",
"following_url": "https://api.github.com/users/OOF-dura/following{/other_user}",
"gists_url": "https://api.github.com/users/OOF-dura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OOF-dura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OOF-dura/subscriptions",
"organizations_url": "https://api.github.com/users/OOF-dura/orgs",
"repos_url": "https://api.github.com/users/OOF-dura/repos",
"events_url": "https://api.github.com/users/OOF-dura/events{/privacy}",
"received_events_url": "https://api.github.com/users/OOF-dura/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, have you read the docs concerning the translation task? It is [available here](https://huggingface.co/transformers/task_summary.html#translation).\r\n\r\nSince you're specifically asking about a Helsinki model, you can find the documentation, with examples, [here](https://huggingface.co/transformers/model_doc/marian.html#multilingual-models)."
] | 1,603 | 1,604 | 1,604 | NONE | null | ```
model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-es-en")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-es-en")
batch_input_str = ("Mary spends $20 on pizza", "She likes eating it", "The pizza was great")
encoded = tokenizer.batch_encode_plus(batch_input_str, pad_to_max_length=True)
```
The ```encoded``` looks like:
```
{'input_ids': [[4963, 10154, 5021, 9, 25, 1326, 2255, 35, 17462, 0], [552, 3996, 2274, 9, 129, 75, 2223, 25, 1370, 0], [42, 17462, 12378, 9, 25, 5807, 1949, 0, 65000, 65000]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]}
```
Then, should I just pass the ```encoded``` to
```
output = model.generate(encoded)
```
And then use
```
res = tokenizer.decode(output)
```
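For reference, a sketch of the batched pattern (assuming the usual Marian usage: encode with `return_tensors="pt"` and `padding=True`, then call `generate` on the tensors and `batch_decode` the result):
```python
batch = tokenizer(list(batch_input_str), return_tensors="pt", padding=True)
output = model.generate(**batch)  # passes input_ids and attention_mask
res = tokenizer.batch_decode(output, skip_special_tokens=True)
```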
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8138/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8137/comments | https://api.github.com/repos/huggingface/transformers/issues/8137/events | https://github.com/huggingface/transformers/issues/8137 | 731,914,635 | MDU6SXNzdWU3MzE5MTQ2MzU= | 8,137 | In built code not able to download for "bert-base-uncased" when running on cluster. | {
"login": "Souravroych",
"id": 51218100,
"node_id": "MDQ6VXNlcjUxMjE4MTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/51218100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Souravroych",
"html_url": "https://github.com/Souravroych",
"followers_url": "https://api.github.com/users/Souravroych/followers",
"following_url": "https://api.github.com/users/Souravroych/following{/other_user}",
"gists_url": "https://api.github.com/users/Souravroych/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Souravroych/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Souravroych/subscriptions",
"organizations_url": "https://api.github.com/users/Souravroych/orgs",
"repos_url": "https://api.github.com/users/Souravroych/repos",
"events_url": "https://api.github.com/users/Souravroych/events{/privacy}",
"received_events_url": "https://api.github.com/users/Souravroych/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It seems that you have no internet access",
"Thank You. We also came to know that the cluster doesn't have internet access. I can manually download it and put that in a cache folder, if that is possible, can you please suggest where we can put this in a cache folder so that it could access from that place.",
"You could put it in any folder and point to that folder instead! The `from_pretrained` method takes either an identifier to point to the S3 bucket, or a local path containing the required files. \r\n\r\nThe files must be named correctly, however (`pytorch_model.bin` for the PT model, `tf_model.h5` for the TF model, and `config.json` for the configuration).\r\n\r\nI guess the easiest for you would be to do something like the following:\r\n\r\n1# Create the model cache\r\n```shell-script\r\nmkdir model_cache\r\ncd model_cache\r\npython\r\n```\r\n2# Download and save the models to the cache (here are two examples with BERT and RoBERTa)\r\n```py\r\n# When doing this you must be careful that the architectures you're using contain all the trained layers that\r\n# you will need in your task. Using the architectures with which they were pre-trained makes sure to contain\r\n# all of these layers\r\nfrom transformers import BertForPreTraining, BertTokenizer, RobertaForMaskedLM, RobertaTokenizer\r\n\r\nBertForPreTraining.from_pretrained(\"bert-base-cased\").save_pretrained(\"bert-cache\")\r\nBertTokenizer.from_pretrained(\"bert-base-cased\").save_pretrained(\"bert-cache\")\r\n\r\nRobertaForMaskedLM.from_pretrained(\"roberta-base\").save_pretrained(\"roberta-cache\")\r\nRobertaTokenizer.from_pretrained(\"roberta-base\").save_pretrained(\"roberta-cache\")\r\n```\r\nYou can check that the folder now contains all the appropriate files:\r\n\r\n```shell-script\r\nls -LR\r\n\r\n# Outputs the following\r\n./bert-cache:\r\nconfig.json pytorch_model.bin special_tokens_map.json tokenizer_config.json vocab.txt\r\n\r\n./roberta-cache:\r\nconfig.json merges.txt pytorch_model.bin special_tokens_map.json tokenizer_config.json vocab.json\r\n\r\n```\r\n\r\nYou can then move your folder `model_cache` to your machine which has no internet access. Hope that helps.",
"Thanks a lot for the detailed explanation. \r\nI followed your steps and saved the checkpoints in model_cache and uncased_l12 (with same contents).However it is showing a keyerrror when it is referencing the model_cache folder\r\n\r\nINFO:tensorflow:Extracting pretrained word embeddings weights from BERT\r\n2020-10-30 14:37:43.909781: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\r\nSome layers from the model checkpoint at /users/sroychou/uncased_l12/ were not used when initializing TFBertModel: ['nsp___cls', 'mlm___cls']\r\n- This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nAll the layers of TFBertModel were initialized from the model checkpoint at /users/sroychou/uncased_l12/.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertModel for predictions without further training.\r\nINFO:tensorflow:Embedding matrix shape '(30522, 768)'\r\nINFO:tensorflow:Loading Pre-trained BERT model for BERT SCORE calculation\r\nsetting default value to last_recorded_value\r\nTraceback (most recent call last):\r\n File \"/users/sroychou/BERT_text_summarisation/scripts/train_bert_summarizer.py\", line 12, in <module>\r\n from metrics import optimizer, loss_function, label_smoothing, get_loss_and_accuracy, tf_write_summary, monitor_run\r\n File \"/users/sroychou/BERT_text_summarisation/scripts/metrics.py\", line 16, in <module>\r\n _, _, _ = b_score([\"I'm Batman\"], [\"I'm Spiderman\"], lang='en', model_type='/users/sroychou/model_cache/')\r\n File \"/users/sroychou/.local/lib/python3.7/site-packages/bert_score/score.py\", line 100, in score\r\n num_layers = model2layers[model_type]\r\nKeyError: '/users/sroychou/model_cache/'\r\n\r\n\r\nIs there something I am doing wrong ? Been stuck on this for sometime. \r\n\r\n ",
"Hmm well it seems that is an issue with `bert_score`? I don't know what is `BERT_text_summarisation`, I don't know what is the `metrics` script, and I do not know what is the `bert_score` package. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | ```
Traceback (most recent call last):
  File "/users/sroychou/BERT_text_summarisation/scripts/train_bert_summarizer.py", line 12, in <module>
    from metrics import optimizer, loss_function, label_smoothing, get_loss_and_accuracy, tf_write_summary, monitor_run
  File "/users/sroychou/BERT_text_summarisation/scripts/metrics.py", line 16, in <module>
    _, _, _ = b_score(["I'm Batman"], ["I'm Spiderman"], lang='en', model_type='bert-base-uncased')
  File "/users/sroychou/.local/lib/python3.7/site-packages/bert_score/score.py", line 105, in score
    tokenizer = AutoTokenizer.from_pretrained(model_type)
  File "/users/sroychou/.local/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 298, in from_pretrained
    config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/users/sroychou/.local/lib/python3.7/site-packages/transformers/configuration_auto.py", line 330, in from_pretrained
    config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/users/sroychou/.local/lib/python3.7/site-packages/transformers/configuration_utils.py", line 382, in get_config_dict
    raise EnvironmentError(msg)
OSError: Can't load config for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a config.json file
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8137/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8136/comments | https://api.github.com/repos/huggingface/transformers/issues/8136/events | https://github.com/huggingface/transformers/issues/8136 | 731,888,590 | MDU6SXNzdWU3MzE4ODg1OTA= | 8,136 | How to perform model.predict loop with TFRobertaForSequenceClassification? | {
"login": "MiriamFarber",
"id": 35157503,
"node_id": "MDQ6VXNlcjM1MTU3NTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/35157503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MiriamFarber",
"html_url": "https://github.com/MiriamFarber",
"followers_url": "https://api.github.com/users/MiriamFarber/followers",
"following_url": "https://api.github.com/users/MiriamFarber/following{/other_user}",
"gists_url": "https://api.github.com/users/MiriamFarber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MiriamFarber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MiriamFarber/subscriptions",
"organizations_url": "https://api.github.com/users/MiriamFarber/orgs",
"repos_url": "https://api.github.com/users/MiriamFarber/repos",
"events_url": "https://api.github.com/users/MiriamFarber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MiriamFarber/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, this [Kaggle notebook](https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta) shows a very concise way to efficiently train/predict Huggingface's `XLMRoberta` (which is the same format as `Roberta`) . Hope it help!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | I'd like to run an inference loop for the following RoBERTa model:
```
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base',return_dict=True,num_labels=2)
```
on a large set of sentence pairs (a couple of hundred thousand). I wanted to use `model.predict` and specify a batch size, but there is no way to pass the inputs below (encoded_data is the tokenization of the input data) to `model.predict`; a possible workaround is sketched after the snippet.
```
attention_mask=encoded_data['attention_mask'],
token_type_ids=encoded_data['token_type_ids']
```
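One possible workaround (a sketch, not from this thread; it assumes TF2 Keras semantics, where `model.predict` accepts a dict of tensors keyed by input name, and `first_sentences`/`second_sentences` are placeholders for the actual data):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('roberta-base')
encoded_data = tokenizer(first_sentences, second_sentences,
                         padding=True, truncation=True, return_tensors="tf")

# Keras accepts a dict of named inputs, so the whole encoding
# (input_ids, attention_mask, ...) can be passed in one call
preds = model.predict(dict(encoded_data), batch_size=64)
```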
So what is the alternative way to do that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8136/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8135/comments | https://api.github.com/repos/huggingface/transformers/issues/8135/events | https://github.com/huggingface/transformers/issues/8135 | 731,799,559 | MDU6SXNzdWU3MzE3OTk1NTk= | 8,135 | Bort (Amazon's reduced BERT) | {
"login": "raulcarlomagno",
"id": 2282315,
"node_id": "MDQ6VXNlcjIyODIzMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2282315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raulcarlomagno",
"html_url": "https://github.com/raulcarlomagno",
"followers_url": "https://api.github.com/users/raulcarlomagno/followers",
"following_url": "https://api.github.com/users/raulcarlomagno/following{/other_user}",
"gists_url": "https://api.github.com/users/raulcarlomagno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raulcarlomagno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raulcarlomagno/subscriptions",
"organizations_url": "https://api.github.com/users/raulcarlomagno/orgs",
"repos_url": "https://api.github.com/users/raulcarlomagno/repos",
"events_url": "https://api.github.com/users/raulcarlomagno/events{/privacy}",
"received_events_url": "https://api.github.com/users/raulcarlomagno/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Any update on this one?",
"This was added in #9112"
] | 1,603 | 1,631 | 1,631 | NONE | null | # 🌟 New model addition
## Model description
Amazon Alexa researchers extract an optimal subset of architectural parameters for the BERT architecture by applying recent breakthroughs in algorithms for neural architecture search. The proposed optimal subset, “Bort,” is just 5.5 percent the effective size of the original BERT-large architecture (not counting the embedding layer), and 16 percent of its net size.
## Open source status
using MXNet and GluonNLP
paper https://arxiv.org/pdf/2010.10499.pdf
repo https://github.com/alexa/bort
* [X] the model implementation is available: (give details)
* [X] the model weights are available: (give details)
* [@adewynter] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8135/reactions",
"total_count": 42,
"+1": 36,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 5
} | https://api.github.com/repos/huggingface/transformers/issues/8135/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8134/comments | https://api.github.com/repos/huggingface/transformers/issues/8134/events | https://github.com/huggingface/transformers/issues/8134 | 731,777,911 | MDU6SXNzdWU3MzE3Nzc5MTE= | 8,134 | Error with multi-gpu training | {
"login": "nrjvarshney",
"id": 19836137,
"node_id": "MDQ6VXNlcjE5ODM2MTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nrjvarshney",
"html_url": "https://github.com/nrjvarshney",
"followers_url": "https://api.github.com/users/nrjvarshney/followers",
"following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}",
"gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions",
"organizations_url": "https://api.github.com/users/nrjvarshney/orgs",
"repos_url": "https://api.github.com/users/nrjvarshney/repos",
"events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}",
"received_events_url": "https://api.github.com/users/nrjvarshney/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@nrjvarshney Hello did you managed this error? I am too having same error. Is there anybody to help?",
"Hello did you managed this error? I am too having same error. Is there anybody to help?"
] | 1,603 | 1,655 | 1,610 | NONE | null | I'm trying to build a QuestionAnswering model using transformers.
It works with single-GPU training but fails with multiple GPUs.
Is there any bug in the code below?
```
# `parameters`, `model_name`, `tokenizer` and `load_data` are defined elsewhere in the poster's script
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader

from transformers import AutoConfig, AutoModelForQuestionAnswering

class QAModel(pl.LightningModule):
def __init__(self):
super(QAModel, self).__init__()
self.model_type = parameters["BaseModel_type"]
self.config = AutoConfig.from_pretrained(model_name)
self.base_model = AutoModelForQuestionAnswering.from_pretrained(model_name, config = self.config)
self.tokenizer = tokenizer
def forward(self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
start_positions=None,
end_positions=None):
outputs = self.base_model(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
start_positions=start_positions,
end_positions=end_positions,
)
return outputs
def prepare_data(self):
self.train_dataset, _, _ = load_data(parameters["TRAIN_FILE"], is_training=True)
self.val_dataset, self.val_examples, self.val_features = load_data(parameters["DEV_FILE"], is_training=False)
self.test_dataset, self.test_examples, self.test_features = load_data(parameters["TEST_FILE"], is_training=False)
def train_dataloader(self):
return DataLoader(dataset=self.train_dataset, batch_size=parameters["batch_size"], shuffle=True, num_workers=parameters["num_threads"])
def val_dataloader(self):
return DataLoader(dataset=self.val_dataset, batch_size=parameters["batch_size"], num_workers=parameters["num_threads"])
def test_dataloader(self):
return DataLoader(dataset=self.test_dataset, batch_size=parameters["batch_size"], num_workers=parameters["num_threads"])
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=parameters["learning_rate"])
def training_step(self, batch, batch_idx):
inputs = {
"input_ids": batch[0],
"attention_mask": batch[1],
"token_type_ids": batch[2],
"start_positions": batch[3],
"end_positions": batch[4],
}
outputs = self.forward(**inputs)
loss = outputs[0]
return {"loss": loss}
def validation_step(self, batch, batch_idx):
inputs = {
"input_ids": batch[0],
"attention_mask": batch[1],
"token_type_ids": batch[2],
}
feature_indices = batch[3]
        outputs = self.forward(**inputs)
        # assumed return (the posted snippet ends here); collects per-batch outputs
        return {"feature_indices": feature_indices, "outputs": outputs}
model = QAModel()
trainer = pl.Trainer(gpus=-1, distributed_backend='dp', max_epochs=parameters["epochs"])
trainer.fit(model)
```
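For the error reported below, a common culprit with Lightning's `dp` backend is that the per-GPU losses are gathered into a vector before `backward()` is called. A minimal sketch of a possible fix, assuming that diagnosis (it is not confirmed in this thread), is to reduce the loss in a `training_step_end` hook on `QAModel`:
```python
def training_step_end(self, outputs):
    # with distributed_backend='dp', Lightning gathers the outputs of
    # training_step across GPUs, so "loss" arrives as a vector of length
    # n_gpus; reducing it to a scalar lets backward() work again
    return {"loss": outputs["loss"].mean()}
```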
I get this error when running it with multiple GPUs:
```
RuntimeError: grad can be implicitly created only for scalar outputs
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8134/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8133/comments | https://api.github.com/repos/huggingface/transformers/issues/8133/events | https://github.com/huggingface/transformers/pull/8133 | 731,759,594 | MDExOlB1bGxSZXF1ZXN0NTExODExODM4 | 8,133 | [examples] minimal version requirement run-time check in PL | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,604 | 1,604 | CONTRIBUTOR | null | This PR adds a run-time version check for PL, via a warning for now. This is a follow-up to https://github.com/huggingface/transformers/pull/7852#issuecomment-718144095
In the nature of development we don't constantly re-run `pip install -r requirements.txt`, so when a breaking change is introduced we often have to signal to each other - hey, upgrade your PL. It'd be much simpler to let the program do this automatically for us.
For now one needs to update requirements.txt and the relevant .py files, but we could automate this to have one source to maintain - parse `requirements.txt` and pull the important minimum version from there...
For now this is just a hardcoded, plain check.
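A minimal sketch of what such a hardcoded check might look like (hypothetical; the PR's actual code may differ, and `MIN_PL_VERSION` is an assumed value):
```python
import pytorch_lightning as pl
from packaging import version

MIN_PL_VERSION = "1.0.4"  # assumed minimum, for illustration only

if version.parse(pl.__version__) < version.parse(MIN_PL_VERSION):
    # could be a warning for now, or an error as suggested below
    raise RuntimeError(
        f"pytorch_lightning>={MIN_PL_VERSION} is required, found {pl.__version__}; "
        "please run `pip install -r requirements.txt`"
    )
```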
**My only suggestion is to make it an error** - there are too many warnings in the test suite for someone to notice yet another one - so I vote for making it an error.
@sshleifer, @sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8133/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8133",
"html_url": "https://github.com/huggingface/transformers/pull/8133",
"diff_url": "https://github.com/huggingface/transformers/pull/8133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8133.patch",
"merged_at": 1604427432000
} |
https://api.github.com/repos/huggingface/transformers/issues/8132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8132/comments | https://api.github.com/repos/huggingface/transformers/issues/8132/events | https://github.com/huggingface/transformers/pull/8132 | 731,747,093 | MDExOlB1bGxSZXF1ZXN0NTExODAxNDU4 | 8,132 | New template for example and MLM example. | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | # What does this PR do?
This PR adds a cookiecutter template for adding a new example and experiments with it by adding the new `run_mlm` script. It runs with the same results as the old version. I'll also add a `run_plm` script specific to XLNet, then update the README and remove the old script.
Side note: the random-masking part applied in a data collator can become platform-agnostic later on, if datasets adds a lazy map method.
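For context, the random masking referenced here follows the standard BERT-style 80/10/10 rule; a sketch of that rule (not this PR's actual code):
```python
import torch

def mask_tokens(inputs, tokenizer, mlm_probability=0.15):
    # pick ~15% of tokens as prediction targets; everything else is ignored by the loss
    labels = inputs.clone()
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100

    # 80% of targets become [MASK]
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

    # 10% become a random token; the remaining 10% stay unchanged
    random_ = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    inputs[random_] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[random_]
    return inputs, labels
```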
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8132/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8132",
"html_url": "https://github.com/huggingface/transformers/pull/8132",
"diff_url": "https://github.com/huggingface/transformers/pull/8132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8132.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8131/comments | https://api.github.com/repos/huggingface/transformers/issues/8131/events | https://github.com/huggingface/transformers/pull/8131 | 731,732,849 | MDExOlB1bGxSZXF1ZXN0NTExNzg5NDI1 | 8,131 | [s2s test] cleanup | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | This PR introduces no functional change; it just does a cleanup left over from the initial split and copy of the distillation tests...
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8131/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8131",
"html_url": "https://github.com/huggingface/transformers/pull/8131",
"diff_url": "https://github.com/huggingface/transformers/pull/8131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8131.patch",
"merged_at": 1603918236000
} |
https://api.github.com/repos/huggingface/transformers/issues/8130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8130/comments | https://api.github.com/repos/huggingface/transformers/issues/8130/events | https://github.com/huggingface/transformers/pull/8130 | 731,725,439 | MDExOlB1bGxSZXF1ZXN0NTExNzgzMTQz | 8,130 | Name or path should be added on configuration as well | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Close https://github.com/huggingface/transformers/issues/8035
Currently a configuration initialized with
```py
config = BertConfig.from_pretrained(model_name)
```
does not have the `_model_name_or_path` attribute, whereas a configuration initialized from a model with
```py
model = BertModel.from_pretrained(model_name)
```
does.
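A quick way to observe the discrepancy (a sketch; the attribute name is taken from this PR's description):
```python
from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained("bert-base-uncased")
print(hasattr(config, "_model_name_or_path"))  # False before this fix, per the description

model = BertModel.from_pretrained("bert-base-uncased")
print(hasattr(model.config, "_model_name_or_path"))  # True
```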
This fixes the discrepancy and fixes the failing test in the process. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8130/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8130",
"html_url": "https://github.com/huggingface/transformers/pull/8130",
"diff_url": "https://github.com/huggingface/transformers/pull/8130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8130.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8129/comments | https://api.github.com/repos/huggingface/transformers/issues/8129/events | https://github.com/huggingface/transformers/pull/8129 | 731,725,335 | MDExOlB1bGxSZXF1ZXN0NTExNzgzMDY1 | 8,129 | Fix typo in `AutoModelForMaskedLM` docs | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8129/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8129",
"html_url": "https://github.com/huggingface/transformers/pull/8129",
"diff_url": "https://github.com/huggingface/transformers/pull/8129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8129.patch",
"merged_at": 1603914749000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8128/comments | https://api.github.com/repos/huggingface/transformers/issues/8128/events | https://github.com/huggingface/transformers/pull/8128 | 731,702,086 | MDExOlB1bGxSZXF1ZXN0NTExNzYzNzg0 | 8,128 | test style | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8128/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8128",
"html_url": "https://github.com/huggingface/transformers/pull/8128",
"diff_url": "https://github.com/huggingface/transformers/pull/8128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8128.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/8127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8127/comments | https://api.github.com/repos/huggingface/transformers/issues/8127/events | https://github.com/huggingface/transformers/issues/8127 | 731,694,539 | MDU6SXNzdWU3MzE2OTQ1Mzk= | 8,127 | Use pipeline on fine tuned model | {
"login": "prince14322",
"id": 19497571,
"node_id": "MDQ6VXNlcjE5NDk3NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19497571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prince14322",
"html_url": "https://github.com/prince14322",
"followers_url": "https://api.github.com/users/prince14322/followers",
"following_url": "https://api.github.com/users/prince14322/following{/other_user}",
"gists_url": "https://api.github.com/users/prince14322/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prince14322/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prince14322/subscriptions",
"organizations_url": "https://api.github.com/users/prince14322/orgs",
"repos_url": "https://api.github.com/users/prince14322/repos",
"events_url": "https://api.github.com/users/prince14322/events{/privacy}",
"received_events_url": "https://api.github.com/users/prince14322/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | # ❓ Questions & Help
## Details
I have fine-tuned the 'roberta-large' model on my dataset. It is a sequence classification task.
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # `device` was undefined in the original snippet

MODEL_NAME = 'roberta-large'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).to(device)
# Prediction function
def predict(sent):
sequence = tokenizer.encode_plus(sent, return_tensors="pt")['input_ids'].to(device)
    logits = model(sequence)[0]
    return logits
```
The above works fine, but now I would like to use this model in a pipeline, like the one we have for question-answering:
```
nlp = pipeline('question-answering', model='distilbert-base-cased-distilled-squad', tokenizer='bert-base-cased')
```
Any example would help.
Thank You.
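For reference, a minimal sketch of one way to do this (not from the thread; it assumes the fine-tuned head is a sequence-classification head, which the generic text-classification task can wrap):
```python
from transformers import pipeline

# wrap the fine-tuned model and tokenizer from the snippet above;
# "sentiment-analysis" maps to the generic text-classification pipeline
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("some input sentence"))
```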
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8127/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8126/comments | https://api.github.com/repos/huggingface/transformers/issues/8126/events | https://github.com/huggingface/transformers/pull/8126 | 731,669,497 | MDExOlB1bGxSZXF1ZXN0NTExNzM2Mzg2 | 8,126 | Update CI cache | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,603 | 1,603 | 1,603 | MEMBER | null | Update the CI cache as torch 1.7 has been released | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8126/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8126",
"html_url": "https://github.com/huggingface/transformers/pull/8126",
"diff_url": "https://github.com/huggingface/transformers/pull/8126.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8126.patch",
"merged_at": 1603907984000
} |
https://api.github.com/repos/huggingface/transformers/issues/8125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8125/comments | https://api.github.com/repos/huggingface/transformers/issues/8125/events | https://github.com/huggingface/transformers/issues/8125 | 731,669,109 | MDU6SXNzdWU3MzE2NjkxMDk= | 8,125 | Cannot load saved tokenizer using AutoTokenizer | {
"login": "trias702",
"id": 25867060,
"node_id": "MDQ6VXNlcjI1ODY3MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/25867060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trias702",
"html_url": "https://github.com/trias702",
"followers_url": "https://api.github.com/users/trias702/followers",
"following_url": "https://api.github.com/users/trias702/following{/other_user}",
"gists_url": "https://api.github.com/users/trias702/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trias702/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trias702/subscriptions",
"organizations_url": "https://api.github.com/users/trias702/orgs",
"repos_url": "https://api.github.com/users/trias702/repos",
"events_url": "https://api.github.com/users/trias702/events{/privacy}",
"received_events_url": "https://api.github.com/users/trias702/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Indeed, I wouldn't say this is a bug but more of a limitation of the `AutoTokenizer` class that has to rely on the model configuration in order to guess which tokenizer is affiliated with the model. Since you're not interacting with the configuration in the configuration anywhere here, and, therefore, are not saving the model configuration in `TEST/tokenizer`, the AutoTokenizer cannot guess from which tokenizer to load.\r\n\r\nOne way to go around this limitation is to either specify the configuration when loading the tokenizer for the second time:\r\n```py\r\nfrom transformers import AutoTokenizer, AutoConfig\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('roberta-base')\r\ntokenizer.save_pretrained('TEST/tokenizer')\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('TEST/tokenizer', config=AutoConfig.from_pretrained(\"roberta-base\"))\r\n```\r\nAnother way would be to save the configuration in the initial folder:\r\n```py\r\nfrom transformers import AutoTokenizer, AutoConfig\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('roberta-base')\r\nconfig = AutoConfig.from_pretrained('roberta-base')\r\n\r\ntokenizer.save_pretrained('TEST/tokenizer')\r\nconfig.save_pretrained('TEST/tokenizer')\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('TEST/tokenizer')\r\n```\r\n\r\nIn any case, the documentation about this should be improved.",
"Thank you for that reply, I very much appreciate it!\r\n\r\nWhat about the following, would this work also?\r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained('roberta-base')\r\n# make changes to tokenizer, for example add custom tokens\r\ntokenizer.save_pretrained('TEST/tokenizer')\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('roberta-base')\r\ntokenizer = tokenizer.from_pretrained('TEST/tokenizer')\r\n```\r\n\r\nIf you do it this way, when you call the last line of the code, will you restore any changes you previously made to the tokenizer?\r\n\r\nFinally, for what's it worth, I do believe that the way the library is doing it now is wrong, from a design philosophy perspective. The Tokenizers should be able to stand completely apart from their models, as they are their own classes, with their own configs and config format. You shouldn't need the Model Config in order to save down and restore a tokenizer, because you can do it entirely without the model if you call the direct model tokenizer class:\r\n\r\n```\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base')\r\ntokenizer.save_pretrained('TEST/tokenizer')\r\n\r\ntokenizer = RobertaTokenizer.from_pretrained('TEST/tokenizer')\r\n# WORKS\r\n```\r\n\r\nSo it really should not make any difference if you execute the same design pattern, but from a model-agnostic way, as in my original example:\r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained('roberta-base')\r\ntokenizer.save_pretrained('TEST/tokenizer')\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('TEST/tokenizer')\r\n# ERROR, but really should work\r\n```\r\n\r\nThe AutoTokenizer class should just be about Tokenizer, and should not be calling AutoConfig (which is for models). Basically, you need an AutoTokenConfig class instead, which decouples the two. Calling `save_pretrained` on a Tokenizer (any tokenizer) should save all the information about it (including it's model-class, for example RobertaTokenizer) such that you can then load it from disk using AutoTokenizer, and the AutoTokenizer would be smart enough to check the files on disk, read some JSON info, and say \"Ah yes, this should be a RobertaTokenizer\" and then return to you a RobertaTokenizer object, even though you called AutoTokenizer.from_pretrained. In fact, as it stands now, this information about tokenizer type is already being written to disk, it's just not being read back by the AutoTokenizer. If you created an AutoTokenizerConfig class with its own tokenizer-specific config reading-from-disk methods, then you could easily accomplish this.\r\n\r\nThe reason this would be a powerful design pattern to have is you could make complex language modelling pipelines across different scripts and the tokenizer would only need to be class specified once, at the topmost script.\r\n\r\nFor example, say you have a script which preprocesses a custom corpus for downstream language modelling, and it does this using Shelve, creating compressed records ready to be read by a downstream collator class. 
But in the same directory, it also saves down the (custom) tokenizer used, let's say a modified RobertaTokenizer.\r\n\r\nThe downstream script would not need to know anything about RobertaTokenizer, all it does is read in the Shelve records, and loads the tokenizer using AutoTokenizer.from_pretrained, and then just runs what it needs to run, and hands its results to yet another downstream process, and then that process also just loads the tokenizer using AutoTokenizer.from_pretrained, and doesn't need to know anything about what type of tokenizer it is, because it just uses the PretrainedTokenizer base class methods.\r\n\r\nSo the only script that ever knew about RobertaTokenizer was the very first one, and it saved it using save_pretrained, and then all of the downstream worker scripts just load that tokenizer using AutoTokenizer.from_pretrained. This allows all the downstream scripts to be model-agnostic, and not need to know about RobertaTokenizer at all, meaning they could work with any PretrainedTokenizer at all.\r\n\r\nThis is a very efficient pipeline that makes full use of the abstract base classes like PretrainedTokenizer. Otherwise you need each of your downstream scripts to be model-specific, because they need to be told to use RobertaTokenizer instead of BertTokenizer instead of GPT2Tokenizer, etc.\r\n\r\nThe only thing that's missing to make this all work is for AutoTokenizer.from_pretrained to work in the manner which I have original tried to make it work.",
"While we aim for tokenizers and models to be pairs and not standalone classes, I do agree it would be better from a user perspective to put the tokenizer's class directly in the `tokenizer_config.json`, so as to work in the use-case that you mention here. We could add a flag to the configuration, similar to the `architecture` that we have in the model configuration.\r\n\r\nThoughts @julien-c, @thomwolf ?",
"Yes, sounds good to me indeed",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Win10 x64 (1607 Build 14393.3866)
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): None
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
It appears that you can save a tokenizer to disk in a model-agnostic way, but you cannot load it back in a model-agnostic way. Is this a bug or by design?
## To reproduce
Steps to reproduce the behavior:
```
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
tokenizer.save_pretrained('TEST/tokenizer')
tokenizer = AutoTokenizer.from_pretrained('TEST/tokenizer')
# ERROR
```
The error you get is because the config argument is None, which means AutoTokenizer calls AutoConfig.from_pretrained, which uses file_utils.CONFIG_NAME. However, tokenizer.save_pretrained uses tokenization_utils_base.TOKENIZER_CONFIG_FILE instead, so the two are not compatible with one another.
## Expected behavior
I would assume that calling AutoTokenizer.from_pretrained would be able to load and instantiate the correct model tokenizer without the user having to directly import the model tokenizer class first (e.g. RobertaTokenizer.from_pretrained). This would help a lot in moving to a model-agnostic way of handling tokenizers, which I feel is the goal of the AutoTokenizer class. The fact that it can't load a tokenizer from disk seems to be a bug, unless there is a different way of doing this?
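Until then, a workable pattern (a sketch mirroring the workaround suggested in the comments above: persist the model config next to the tokenizer so `AutoTokenizer` can resolve the class):
```python
from transformers import AutoConfig, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
config = AutoConfig.from_pretrained("roberta-base")

tokenizer.save_pretrained("TEST/tokenizer")
config.save_pretrained("TEST/tokenizer")  # gives AutoTokenizer a config.json to read

tokenizer = AutoTokenizer.from_pretrained("TEST/tokenizer")  # now resolves to RobertaTokenizer
```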
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8125/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8124/comments | https://api.github.com/repos/huggingface/transformers/issues/8124/events | https://github.com/huggingface/transformers/issues/8124 | 731,608,316 | MDU6SXNzdWU3MzE2MDgzMTY= | 8,124 | [s2s] distributed eval gets stuck on error w/ multigpu | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"same happens with `finetune.py` - happened in another run when it hit OOM. So basically any error.",
"@williamFalcon @SeanNaren (lightning friends)\r\n\r\nDo you guys have a clever way to collect failures in your multigpu tests?\r\nWhen something breaks, our multigpu test hangs.\r\n",
"yes... good questions haha. \r\n\r\nSo, some things we know:\r\n\r\n1. multi gpu tests should run one per test (ie: don’t parametrize via pytest). Seems that the way pytest starts an experiment does not play well with pytorch distributed. \r\n\r\n2. ddp in lightning needs to use subprocess inside a test and call an external file. \r\nhttps://github.com/PyTorchLightning/pytorch-lightning/blob/master/tests/backends/test_ddp.py#L57\r\n\r\n3. ddp spawn tests need to adhere to that single test per function call I mentioned in 1. pytest parametrized ddp tests WILL freeze the build. ",
"Thank you for the insights, @williamFalcon \r\n\r\nThat is the case already - I discovered the subprocess idea by looking at your distributed ddp test ;) And none of these are parametrized.\r\n\r\nSo it must be something else.\r\n\r\np.s. btw, have you tried the `parameterized` module https://pypi.org/project/parameterized/? It's more flexible than `pytest`'s `parametrize` - perhaps it won't have the same impact (but that's unrelated to this issue).",
"oh, and to clarify, this has nothing to do with testing. The hanging happens in the standalone scripts. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,603 | 1,610 | 1,610 | CONTRIBUTOR | null | `examples/seq2seq/distillation.py` and probably others remain hanging on an internal error when run with multiple GPUs (2 here):
```
rm -r /tmp/tmpqajqhzwo; PYTHONPATH="src" python examples/seq2seq/distillation.py --supervise_forward --normalize_hidden --label_smoothing=0.0 --eval_beams=1 --val_metric=loss --save_top_k=1 --adafactor --early_stopping_patience=-1 --logger_name=default --length_penalty=0.5 --cache_dir= --task=summarization --num_workers=2 --alpha_hid=0 --freeze_embeds --sortish_sampler --student_decoder_layers=1 --val_check_interval=0.5 --output_dir=/tmp/tmpqajqhzwo --no_teacher --fp16_opt_level=O1 --gpus=2 --max_grad_norm=1.0 --do_train --do_predict --accumulate_grad_batches=1 --seed=42 --model_name_or_path=sshleifer/tinier_bart --config_name= --tokenizer_name=facebook/bart-large --learning_rate=0.3 --lr_scheduler=linear --weight_decay=0.0 --adam_epsilon=1e-08 --warmup_steps=0 --max_epochs=2 --train_batch_size=1 --eval_batch_size=2 --max_source_length=12 --max_target_length=12 --val_max_target_length=12 --test_max_target_length=12 --n_train=-1 --n_val=-1 --n_test=-1 --student_encoder_layers=1 --freeze_encoder --data_dir=examples/seq2seq/test_data/wmt_en_ro --alpha_mlm=0.2 --alpha_ce=0.8 --teacher=sshleifer/bart-tiny-random
```
last output:
```
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/distillation.py", line 281, in <module>
distill_main(args)
File "/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/distillation.py", line 269, in distill_main
check_output_dir(args, expected_items=3)
File "/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/utils.py", line 641, in check_output_dir
raise ValueError(
ValueError: Output directory (/tmp/tmpqajqhzwo) already exists and has 7 items in it (expected 3 items). Use --overwrite_output_dir to overcome.
```
and now it hangs, holding onto the GPU. I can't even Ctrl-C the process - it needed to be suspended and killed manually.
I know that adding `--overwrite_output_dir` will remove the error, but this is not the issue. It shouldn't hang on error (e.g. the test suite needs to continue running in such an event).
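One mitigation sketch, assuming the hang comes from peer ranks blocking in a collective after one rank has died (a guess; not confirmed here, and it may not cover every failure mode):
```python
import torch.distributed as dist

def guarded_main(args):
    # wrap the real entry point so a failing rank tears down the process
    # group instead of leaving its peers blocked forever
    try:
        distill_main(args)
    except Exception:
        if dist.is_available() and dist.is_initialized():
            dist.destroy_process_group()
        raise
```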
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8124/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8123/comments | https://api.github.com/repos/huggingface/transformers/issues/8123/events | https://github.com/huggingface/transformers/pull/8123 | 731,602,423 | MDExOlB1bGxSZXF1ZXN0NTExNjgwMjc0 | 8,123 | [DOC] Improve pipeline() docstrings for config and tokenizer | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger I made the change as you requested. Not sure why CI is failing on build_doc. Seems to have to do with some env installation.",
"The failure is spurious (basically the new version of pytorch is not cached on the CI and it fails to download it sometimes). Thanks for th fix!"
] | 1,603 | 1,603 | 1,603 | COLLABORATOR | null | As currently written, it was not clear to me which arguments were needed when using a non-default model in `pipeline()`. It seemed that when you provided a non-default `model`, you still needed to manually change the `config` and `tokenizer` because otherwise the "task's default will be used". In practice, though, the pipeline is smart enough to automatically choose the right config/tokenizer for the given model. This PR clarifies that a bit in the docstrings/documentation, by explaining exactly which priorities are used when loading the tokenizer. A small change was made for `config`, too.
Admittedly, the wording for the tokenizer part is a bit off (programmatic, even), but I think it should make clear how the right tokenizer is loaded.
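The behavior being documented, in one line (the tokenizer and config are resolved from the model identifier when they are omitted):
```python
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
```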
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8123/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8123",
"html_url": "https://github.com/huggingface/transformers/pull/8123",
"diff_url": "https://github.com/huggingface/transformers/pull/8123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8123.patch",
"merged_at": 1603905973000
} |
https://api.github.com/repos/huggingface/transformers/issues/8122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8122/comments | https://api.github.com/repos/huggingface/transformers/issues/8122/events | https://github.com/huggingface/transformers/issues/8122 | 731,597,805 | MDU6SXNzdWU3MzE1OTc4MDU= | 8,122 | behaviour of ZeroShotClassification using facebook/bart-large-mnli is different on online demo vs local machine | {
"login": "turmeric-blend",
"id": 62788745,
"node_id": "MDQ6VXNlcjYyNzg4NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/62788745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turmeric-blend",
"html_url": "https://github.com/turmeric-blend",
"followers_url": "https://api.github.com/users/turmeric-blend/followers",
"following_url": "https://api.github.com/users/turmeric-blend/following{/other_user}",
"gists_url": "https://api.github.com/users/turmeric-blend/gists{/gist_id}",
"starred_url": "https://api.github.com/users/turmeric-blend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/turmeric-blend/subscriptions",
"organizations_url": "https://api.github.com/users/turmeric-blend/orgs",
"repos_url": "https://api.github.com/users/turmeric-blend/repos",
"events_url": "https://api.github.com/users/turmeric-blend/events{/privacy}",
"received_events_url": "https://api.github.com/users/turmeric-blend/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
}
] | [
"Replace `AutoModel` with `AutoModelForSequenceClassification`. The former won't add the sequence classification head, i.e. it will use `BartModel` instead of `BartForSequenceClassification`, so the pipeline is trying to use just the outputs of the encoder instead of the NLI predictions in your snippet.",
"@joeddav that fixed it thanks !",
"Have the same problem:\r\n\r\nconda environment: Python 3.7.9\r\n```\r\npip3 install torch==1.6\r\npip3 install transformers\r\n```\r\n\r\nRunning\r\n\r\n```\r\nfrom transformers import AutoModelForSequenceClassification\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"facebook/bart-large-mnli\")\r\n```\r\n\r\nResults in message:\r\n\r\n> Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartForSequenceClassification: ['model.encoder.version', 'model.decoder.version']\r\n> - This IS expected if you are initializing BartForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n> - This IS NOT expected if you are initializing BartForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n\r\n@turmeric-blend: How is my setup different from yours? \r\n",
"actually the error message was still there after the fix, but the scores running on local machine were consistent with the online demo @gustavengstrom \r\n\r\nany ideas why is there still the warning message @joeddav ?",
"Yeah that warning isn't a concern. It's just letting you know that some of the parameters checkpointed in the pretrained model were not able to be matched with the model class, but in this case it's just a couple of meta-fields (encoder/decoder version), so your weights should be matched up fine."
] | 1,603 | 1,603 | 1,603 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Ubuntu 20.04
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (GPU:Yes)
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sshleifer
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-mnli
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
First, I tried the hosted demo online at Hugging Face, which gives a very high score of **0.99 for travelling (as expected)**:

Then I tried to run the code on my local machine, which returned **very different scores for all labels** (much poorer scores):
```
from transformers import pipeline
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModel.from_pretrained("facebook/bart-large-mnli")
zsc = pipeline(task='zero-shot-classification', tokenizer=tokenizer, model=model)
sequences = 'one day I will see the world'
candidate_labels = ['travelling', 'cooking', 'dancing']
results = zsc(sequences=sequences, candidate_labels=candidate_labels, multi_class=False)
print(results)
>>>{'sequence': 'one day I will see the world',
'labels': ['travelling', 'dancing', 'cooking'],
'scores': [0.5285395979881287, 0.2499372661113739, 0.22152313590049744]}
```
I **got this warning message when initializing the model**:
`model = AutoModel.from_pretrained("facebook/bart-large-mnli")`
```
Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartModel: ['model.encoder.version', 'model.decoder.version']
- This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
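(As the issue comments confirm, this warning is harmless: the unused checkpoint keys are metadata fields, not weights. A quick way to inspect which keys go unused — a sketch relying on the `output_loading_info` flag of `from_pretrained` — is:)
```
# Sketch: inspect which checkpoint keys went unused when loading the model
# (assumes a transformers version where from_pretrained accepts output_loading_info=True).
from transformers import BartForSequenceClassification

model, loading_info = BartForSequenceClassification.from_pretrained(
    "facebook/bart-large-mnli", output_loading_info=True
)
# Expect only metadata fields here, e.g. ['model.encoder.version', 'model.decoder.version']
print(loading_info["unexpected_keys"])
```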
## Expected behavior
The **_scores_** from the code on my local machine should be quite similar to those of the online demo.
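For reference, the resolution from the issue comments is to load the checkpoint with its sequence-classification head via `AutoModelForSequenceClassification`; a minimal sketch of the corrected snippet (same inputs as above):
```
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
# AutoModelForSequenceClassification loads BartForSequenceClassification,
# so the pipeline uses the NLI head rather than raw encoder outputs.
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

zsc = pipeline(task='zero-shot-classification', tokenizer=tokenizer, model=model)

sequences = 'one day I will see the world'
candidate_labels = ['travelling', 'cooking', 'dancing']
print(zsc(sequences=sequences, candidate_labels=candidate_labels, multi_class=False))
```
With the classification head in place, the 'travelling' score should be close to the 0.99 reported by the online demo.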
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8122/timeline | completed | null | null |