url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/9329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9329/comments | https://api.github.com/repos/huggingface/transformers/issues/9329/events | https://github.com/huggingface/transformers/issues/9329 | 775,307,952 | MDU6SXNzdWU3NzUzMDc5NTI= | 9,329 | how to checkpoint all the validation scores in huggingface trainer | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Interested in this as well. Do not find a solution from blog [\"How to monitor both train and validation metrics at the same step?\"](https://discuss.huggingface.co/t/how-to-monitor-both-train-and-validation-metrics-at-the-same-step/1301).",
"> Hi\r\n> I want to find the best model per evaluation score. Could you please give me more info, how I can checkpoint all evaluation scores in each step of training to find the best model? thanks\r\n\r\nI think I figure it out:\r\n```diff\r\ntraining_args = TrainingArguments(\r\n output_dir='./results', # output directory\r\n num_train_epochs=3, # total number of training epochs\r\n per_device_train_batch_size=16, # batch size per device during training\r\n per_device_eval_batch_size=64, # batch size for evaluation\r\n warmup_steps=500, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir='./logs', # directory for storing logs\r\n logging_steps=100,\r\n++ evaluation_strategy='steps',\r\n)\r\n```",
"> Interested in this as well. Do not find a solution from blog [\"How to monitor both train and validation metrics at the same step?\"](https://discuss.huggingface.co/t/how-to-monitor-both-train-and-validation-metrics-at-the-same-step/1301).\r\n\r\n\r\n\r\n> Hi\r\n> I want to find the best model per evaluation score. Could you please give me more info, how I can checkpoint all evaluation scores in each step of training to find the best model? thanks\r\n\r\nCorrespondingly, I put \r\n```diff\r\n# https://huggingface.co/transformers/training.html\r\n#metric = load_metric('glue', 'mrpc')\r\ndef compute_metrics(p):#: EvalPrediction\r\n# def compute_metrics(p):\r\n preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions\r\n# preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1)\r\n preds = np.argmax(preds, axis=1)\r\n# if data_args.task_name is not None:\r\n# result = metric.compute(predictions=preds, references=p.label_ids)\r\n# if len(result) > 1:\r\n# result[\"combined_score\"] = np.mean(list(result.values())).item()\r\n# return result\r\n# elif is_regression:\r\n# return {\"mse\": ((preds - p.label_ids) ** 2).mean().item()}\r\n# else:\r\n return {\"accuracy\": (preds == p.label_ids).astype(np.float32).mean().item()}\r\n\r\ntrainer = Trainer(\r\n model=model, # the instantiated 🤗 Transformers model to be trained\r\n args=training_args, # training arguments, defined above\r\n train_dataset=train_dataset, # training dataset\r\n eval_dataset=val_dataset, # evaluation dataset\r\n++ compute_metrics = compute_metrics\r\n)\r\n\r\ntrainer.train()\r\n```\r\nand result in output:\r\n\r\n\r\n",
"\r\n- However, in terms of `Accuracy`, Not for sure it is on training dataset or validation dataset.",
"`Accuracy` here is for validation dataset.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,619 | 1,619 | NONE | null | Hi
I want to find the best model per evaluation score. Could you please give me more info, how I can checkpoint all evaluation scores in each step of training to find the best model? thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9329/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9329/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9328/comments | https://api.github.com/repos/huggingface/transformers/issues/9328/events | https://github.com/huggingface/transformers/issues/9328 | 775,288,750 | MDU6SXNzdWU3NzUyODg3NTA= | 9,328 | expected str, bytes or os.PathLike object, not NoneType | {
"login": "hjzhang1018",
"id": 58802867,
"node_id": "MDQ6VXNlcjU4ODAyODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/58802867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hjzhang1018",
"html_url": "https://github.com/hjzhang1018",
"followers_url": "https://api.github.com/users/hjzhang1018/followers",
"following_url": "https://api.github.com/users/hjzhang1018/following{/other_user}",
"gists_url": "https://api.github.com/users/hjzhang1018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hjzhang1018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hjzhang1018/subscriptions",
"organizations_url": "https://api.github.com/users/hjzhang1018/orgs",
"repos_url": "https://api.github.com/users/hjzhang1018/repos",
"events_url": "https://api.github.com/users/hjzhang1018/events{/privacy}",
"received_events_url": "https://api.github.com/users/hjzhang1018/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @hjzhang1018,\r\n\r\nThanks for your bug report. Could you try to run your command again - I think it should be fixed now: https://huggingface.co/seyonec/SMILES_tokenized_PubChem_shard00_160k/commit/7ef67531cfe96d0e2aa3ae913352c8e9a8c1df4f",
"@seyonechithrananda I added some files to your repo here: https://huggingface.co/seyonec/SMILES_tokenized_PubChem_shard00_160k/commit/7ef67531cfe96d0e2aa3ae913352c8e9a8c1df4f - feel free to take a look and see whether this is OK for you. It might be possible that other models require this change as well. ",
"@hjzhang1018 @patrickvonplaten Hi all, thanks for the fix! I believe the reason this may be happening is because the tokenizer we use is custom (a subclass of BertTokenizer) and thus we run into the issue of fitting directly with AutoTokenizer. We have a PR for a tutorial in the DeepChem [library](https://github.com/deepchem/deepchem/pull/2302), which demonstrates how to call our subclass instead of using AutoTokenizer. If you refer to Part 23 hopefully that is of use!\r\n\r\nDocs: https://deepchem.readthedocs.io/en/latest/api_reference/tokenizers.html",
"Link to SmilesTokenizer class which these models utilize: https://github.com/deepchem/deepchem/blob/master/deepchem/feat/smiles_tokenizer.py#L39-L282",
"Just read the changes, it looks like @patrickvonplaten directly converted the vocab.txt file sufficient for BertTokenizer into the vocab.json format necessary for RoBERTa tokenizers, which should run smoothly. I will try this out once I get more time but this fix should work. Thanks a lot for the quick fix!",
"Is there a way to add this change to the other models with the 'SmilesTokenizer', @patrickvonplaten? Thanks again for the support.",
"@patrickvonplaten Thank you so much for your help! Now this worked!\r\n@seyonechithrananda Thank you for the explanation. I'll read the tutorials carefully. Very useful!",
"> Is there a way to add this change to the other models with the 'SmilesTokenizer', @patrickvonplaten? Thanks again for the support.\r\n\r\nYes on simply has to create an empty 'merges.txt' file and create `vocab.json` from vocab.txt",
"Hi, I have the same issues here, I have a custom roberta model and I am using https://github.com/UKPLab/sentence-transformers. Here is the full detail of my problem: https://github.com/UKPLab/sentence-transformers/issues/658\r\n\r\nThe output after training from this sentence transformers yield files that doesn't contain vocab.json or vocab.txt. But I have a file called `unigram.json` and it looks something like this:\r\n\r\n\r\n\r\n\r\n```\r\n{\r\n \"unk_id\": 0,\r\n \"vocab\": [\r\n [\r\n \"<unk>\",\r\n 0.0\r\n ],\r\n [\r\n \"<sep>\",\r\n 0.0\r\n ],\r\n [\r\n \"<pad>\",\r\n 0.0\r\n ],\r\n ....\r\n ]\r\n}\r\n```\r\n\r\nI also faced this TypeError, the same as the title of this issue, when trying to use AutoTokenizer",
"In short every Roberta-like Tokenizer requires two files:\r\n\r\n1) One merges.txt file. This file describes the BPE algorithm (which letters are merged in which order)\r\n2) One vocab.json file. This file describes the vocabulary.\r\n\r\nTo get an idea of how the format of these files should be, I'd recommend taking a look at some the files in `roberta-base` here:https://huggingface.co/roberta-base/tree/main\r\n\r\n@seyonechithrananda, I think in your case, you don't need a merges.txt file because of the small vocabulary and because there are no words, just tokens\r\n@hjzhang1018 If the other library: UKPLab/sentence-transformers uses the same format for loading/saving files that we do, then file should be renamed to be called `vocab.json` and should have a different format (check out the format here: https://huggingface.co/roberta-base/blob/main/vocab.json).",
"So after I call `sentence-transformers` save which gives me back 6 files:\r\n1. I will need to rename `unigram.json` to `vocab.json`\r\n2. Change the format of `unigram.json` to follows `vocab.json` structure\r\n3. Create an empty `merge.txt` file\r\n\r\nAnd currently my `unigram.json` contains a word weight:\r\n```\r\n{\r\n \"unk_id\": 0,\r\n \"vocab\": [\r\n [\r\n \"<unk>\",\r\n 0.0\r\n ],\r\n [\r\n \"<sep>\",\r\n 0.0\r\n ],\r\n [\r\n \"<pad>\",\r\n 0.0\r\n ],\r\n [\r\n \"<cls>\",\r\n 0.0\r\n ],\r\n [\r\n \"<mask>\",\r\n 0.0\r\n ],\r\n [\r\n \",\",\r\n -3.1215689182281494\r\n ],\r\n [\r\n \".\",\r\n -3.642984628677368\r\n ],\r\n [\r\n \"a\",\r\n -4.921720027923584\r\n ],\r\n ......\r\n ]\r\n}\r\n```\r\n\r\nDo I just ignore all the weights and created a new file `vocab.json` with this format ?\r\n```\r\n{\r\n \"<unk>\": 0,\r\n \"<sep>\": 1,\r\n \"<pad>\": 2,\r\n \"<cls>\": 3,\r\n \"<mask>\": 4,\r\n \",\": 5,\r\n \".\": 6,\r\n \"a\": 7,\r\n .......\r\n}\r\n```",
"I started getting this error only about an hour ago without any changes on my side in my old Colab notebook. There must be some version change in Colab env that triggers this error. Any ideas what it might be?\r\n",
"@lenyabloko could you open a new issue with the issue you're facing and how to reproduce it? Thank you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Darwin-18.6.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
tokenizers: @mfuntowicz
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): I don't know
The problem arises when using:
* [ ] the official example scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below) I was trying to use this for further transfer learning.
## To reproduce
Steps to reproduce the behavior(the snippet I used):
```
import deepchem as dc
import tensorflow as tf
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("seyonec/SMILES_tokenized_PubChem_shard00_160k")
model = AutoModelForMaskedLM.from_pretrained("seyonec/SMILES_tokenized_PubChem_shard00_160k")
```
Then I got this error message: "TypeError: expected str, bytes or os.PathLike object, not NoneType".
I appreciate any help/suggestions! Thanks very much.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9328/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9327/comments | https://api.github.com/repos/huggingface/transformers/issues/9327/events | https://github.com/huggingface/transformers/issues/9327 | 775,249,141 | MDU6SXNzdWU3NzUyNDkxNDE= | 9,327 | No module named 'transformers.modeling_albert' | {
"login": "hjzhang1018",
"id": 58802867,
"node_id": "MDQ6VXNlcjU4ODAyODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/58802867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hjzhang1018",
"html_url": "https://github.com/hjzhang1018",
"followers_url": "https://api.github.com/users/hjzhang1018/followers",
"following_url": "https://api.github.com/users/hjzhang1018/following{/other_user}",
"gists_url": "https://api.github.com/users/hjzhang1018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hjzhang1018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hjzhang1018/subscriptions",
"organizations_url": "https://api.github.com/users/hjzhang1018/orgs",
"repos_url": "https://api.github.com/users/hjzhang1018/repos",
"events_url": "https://api.github.com/users/hjzhang1018/events{/privacy}",
"received_events_url": "https://api.github.com/users/hjzhang1018/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @hjzhang1018,\r\n\r\nThis does not seem to be a bug in Transformers, but rather in `seyonechithrananda/simpletransformers.git` so I'm not sure here is the correct place to post the issue. I think one has to change the line `from transformers.modeling_albert import ....` to `from transformers.models.albert.modeling_albert import ...` in the respective repo."
] | 1,609 | 1,609 | 1,609 | NONE | null | - Platform: Colab
- Python version:
- PyTorch version (GPU?):GPU
- Tensorflow version (GPU?):GPU
- Using GPU in script?:Yes
examples/token-classification: @stefan-it
The problem arises when using:
* [ ] the official example scripts: I'm following the tutorial "23_Transfer_Learning_With_ChemBERTa_Transformers_Pt_2.ipynb" to reproduce the results. However I got this error message " No module named 'transformers.modeling_albert". I cannot figure out the reason. The following is the snippet I used:
```
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
from rdkit import Chem
!git clone https://github.com/NVIDIA/apex
!cd /content/apex
!pip install -v --no-cache-dir /content/apex
!pip install transformers
!pip install git+https://github.com/seyonechithrananda/simpletransformers.git@pip
!pip install wandb
!cd ..
!git clone https://github.com/seyonechithrananda/bert-loves-chemistry.git
%cd /content/bert-loves-chemistry
import os
import numpy as np
import pandas as pd
from typing import List
# import molnet loaders from deepchem
from deepchem.molnet import load_bbbp, load_clearance, load_clintox, load_delaney, load_hiv, load_qm7, load_tox21
from rdkit import Chem
# import MolNet dataloder from bert-loves-chemistry fork
from utils.molnet_dataloader import load_molnet_dataset, write_molnet_dataset_for_chemprop
tasks, (train_df, valid_df, test_df), transformers = load_molnet_dataset("clintox", tasks_wanted=None)
from simpletransformers.classification import ClassificationModel
import logging
logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)
```
Any suggestions and help are appreciated! Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9327/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9326/comments | https://api.github.com/repos/huggingface/transformers/issues/9326/events | https://github.com/huggingface/transformers/issues/9326 | 775,236,458 | MDU6SXNzdWU3NzUyMzY0NTg= | 9,326 | Issue with 'char_to_token()' function of DistilBertTokenizerFast | {
"login": "PremalMatalia",
"id": 42915124,
"node_id": "MDQ6VXNlcjQyOTE1MTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/42915124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PremalMatalia",
"html_url": "https://github.com/PremalMatalia",
"followers_url": "https://api.github.com/users/PremalMatalia/followers",
"following_url": "https://api.github.com/users/PremalMatalia/following{/other_user}",
"gists_url": "https://api.github.com/users/PremalMatalia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PremalMatalia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PremalMatalia/subscriptions",
"organizations_url": "https://api.github.com/users/PremalMatalia/orgs",
"repos_url": "https://api.github.com/users/PremalMatalia/repos",
"events_url": "https://api.github.com/users/PremalMatalia/events{/privacy}",
"received_events_url": "https://api.github.com/users/PremalMatalia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @PremalMatalia, \r\n\r\nCould you please provide a copy/paste ready code-snippet that can be used to reproduce the error. By copy/past ready code snippet I mean something like:\r\n\r\n```python\r\nfrom transformers import DistilBertTokenizerFast\r\ntokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')\r\n\r\n# ... add all the necessary code here to be able to reproduce your error\r\n```\r\n\r\n. Thanks! ",
"Hello Patrick,\r\nPlease find entire code starting from SQuAD 2.0 training data download to encoding to adding start and end position as below:\r\n\r\n```python\r\n!pip install wget\r\n!pip install transformers==4.0.1\r\n\r\nimport wget\r\nimport json\r\nfrom pathlib import Path\r\nimport os\r\nimport json\r\nfrom transformers import DistilBertTokenizerFast,TFDistilBertForQuestionAnswering\r\nimport tensorflow as tf\r\n\r\n# Import training data\r\n!mkdir squad\r\ntrain_source = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json'\r\ntrain_dest = 'squad/train-v2.0.json'\r\nwget.download(train_source,train_dest)\r\n\r\n## Function to extract context, questions and answers\r\ndef read_squad(path,dataset='train'):\r\n \r\n contexts = []\r\n questions = []\r\n question_ids = []\r\n answers = []\r\n blank_counter = 0\r\n append_flag = False\r\n\r\n with open(path) as f:\r\n data = json.load(f)\r\n \r\n ## Loop over entire dataset\r\n for article_id in range(len(data['data'])):\r\n paragraphs = data['data'][article_id]['paragraphs']\r\n ## Loop over all the paragraphs\r\n for paragraph in paragraphs:\r\n context = paragraph['context']\r\n qas = paragraph['qas']\r\n ## Loop over Questions and Answers\r\n for qa in qas:\r\n append_flag=False\r\n question = qa['question']\r\n question_id = qa['id']\r\n ## Select 1st answer if answers are available \r\n if qa['answers']:\r\n answer = qa['answers'][0]\r\n append_flag = True\r\n ## Append contexts and questions in a list and answers in a list as dictionary\r\n contexts.append(context)\r\n questions.append(question)\r\n question_ids.append(question_id)\r\n answers.append(answer) \r\n \r\n return contexts, questions, question_ids, answers\r\n\r\ntrain_contexts, train_questions,_, train_answers = read_squad('squad/train-v2.0.json')\r\n\r\n\r\n## Function to update answer_start and answer_end \r\ndef add_end_idx(answers, contexts):\r\n '''\r\n Description: This function is to find out character position at which the answer ends in the passage. 
\r\n Also corrects answer start and end position if the SQuAD answers are off by one or two characters\r\n Input: List of all answers, List of all contexts\r\n Output: Updated list with answer end position\r\n '''\r\n for answer, context in zip(answers, contexts):\r\n # Your code here\r\n if answer['answer_start'] is None:\r\n answer['answer_end'] = None\r\n else:\r\n answer_text = answer['text']\r\n answer_start = answer['answer_start']\r\n answer_end = len(answer_text) + answer_start\r\n\r\n #Sometimes answers are off by a character or two\r\n if context[answer_start:answer_end] == answer['text']:\r\n answer['answer_end'] = answer_end\r\n # If the answer text is off by 1 character\r\n elif context[answer_start-1:answer_end-1] == answer_text:\r\n answer['answer_start'] = answer_start - 1\r\n answer['answer_end'] = answer_end - 1 \r\n # If the answer text is off by 2 characters\r\n elif context[answer_start-2:answer_end-2] == answer_text:\r\n answer['answer_start'] = answer_start - 2\r\n answer['answer_end'] = answer_end - 2 \r\n\r\nadd_end_idx(train_answers, train_contexts)\r\n\r\n## Tokenize training data\r\ntokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')\r\ntrain_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True,\r\n return_offsets_mapping=True,\r\n return_overflowing_tokens=True)\r\n\r\n## Find out start_position and end_position in encoded dataset\r\ndef add_token_positions(encodings, answers):\r\n start_positions = []\r\n end_positions = []\r\n\r\n for idx in range(len(answers)):\r\n start_positions.append(encodings.char_to_token(idx, answers[idx]['answer_start']))\r\n if answers[idx]['answer_end'] is None:\r\n end_positions.append(encodings.char_to_token(idx, answers[idx]['answer_end']))\r\n else:\r\n end_positions.append(encodings.char_to_token(idx, answers[idx]['answer_end'] - 1))\r\n \r\n #if None, the answer passage has been truncated due to words > 512 so setting last position as 511\r\n if start_positions[-1] is None:\r\n start_positions[-1] = tokenizer.model_max_length-1\r\n if end_positions[-1] is None:\r\n end_positions[-1] = tokenizer.model_max_length-1\r\n \r\n encodings.update({'start_positions': start_positions, 'end_positions': end_positions})\r\n\r\nadd_token_positions(train_encodings, train_answers)\r\n\r\n## Validate answers based on start_position and end_position with actual answer for some random index\r\nidx=8\r\nprint(f'Actual context: {train_contexts[idx]}')\r\nprint(f'Actual question: {train_questions[idx]}')\r\nprint(f\"Actual answer: {train_answers[idx]['text']}\")\r\n\r\nstart_position=train_encodings.char_to_token(idx,train_answers[idx]['answer_start'])\r\nend_position =train_encodings.char_to_token(idx,train_answers[idx]['answer_end'])\r\n\r\n## ******This shows how start_position and end_position derived by using char_to_token() function is not correct******\r\nprint(f\"Answer after tokenization: {tokenizer.convert_ids_to_tokens(train_encodings['input_ids'][idx][start_position:end_position])}\")\r\n```",
"Completely agree with @PremalMatalia. \r\nThe problem is with -\r\nstart_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))\r\n\r\nWe are getting None where as we should have got start token position. \r\nchar_to_token is not able to convert from string position to token position.",
"Thanks @PremalMatalia - I think I can reproduce.\r\n\r\nThe PR attached below should fix the problem. Can you check it again with the proposed fix?",
"Thanks @patrickvonplaten for quick action.\r\nIf I understand correctly, fix has been merged to original char_to_token() function? If yes, we can directly use the same function without any changes in code from myside. Is that correct?",
"No the `char_to_token()` function was always correct (It's actually a rust function from tokenizers that is used with python bindings). The function was simply used incorrectly, so I updated the docs.",
"> Hello Patrick,\r\n> Please find entire code starting from SQuAD 2.0 training data download to encoding to adding start and end position as below:\r\n> \r\n> ```python\r\n> !pip install wget\r\n> !pip install transformers==4.0.1\r\n> \r\n> import wget\r\n> import json\r\n> from pathlib import Path\r\n> import os\r\n> import json\r\n> from transformers import DistilBertTokenizerFast,TFDistilBertForQuestionAnswering\r\n> import tensorflow as tf\r\n> \r\n> # Import training data\r\n> !mkdir squad\r\n> train_source = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json'\r\n> train_dest = 'squad/train-v2.0.json'\r\n> wget.download(train_source,train_dest)\r\n> \r\n> ## Function to extract context, questions and answers\r\n> def read_squad(path,dataset='train'):\r\n> \r\n> contexts = []\r\n> questions = []\r\n> question_ids = []\r\n> answers = []\r\n> blank_counter = 0\r\n> append_flag = False\r\n> \r\n> with open(path) as f:\r\n> data = json.load(f)\r\n> \r\n> ## Loop over entire dataset\r\n> for article_id in range(len(data['data'])):\r\n> paragraphs = data['data'][article_id]['paragraphs']\r\n> ## Loop over all the paragraphs\r\n> for paragraph in paragraphs:\r\n> context = paragraph['context']\r\n> qas = paragraph['qas']\r\n> ## Loop over Questions and Answers\r\n> for qa in qas:\r\n> append_flag=False\r\n> question = qa['question']\r\n> question_id = qa['id']\r\n> ## Select 1st answer if answers are available \r\n> if qa['answers']:\r\n> answer = qa['answers'][0]\r\n> append_flag = True\r\n> ## Append contexts and questions in a list and answers in a list as dictionary\r\n> contexts.append(context)\r\n> questions.append(question)\r\n> question_ids.append(question_id)\r\n> answers.append(answer) \r\n> \r\n> return contexts, questions, question_ids, answers\r\n> \r\n> train_contexts, train_questions,_, train_answers = read_squad('squad/train-v2.0.json')\r\n> \r\n> \r\n> ## Function to update answer_start and answer_end \r\n> def add_end_idx(answers, contexts):\r\n> '''\r\n> Description: This function is to find out character position at which the answer ends in the passage. 
\r\n> Also corrects answer start and end position if the SQuAD answers are off by one or two characters\r\n> Input: List of all answers, List of all contexts\r\n> Output: Updated list with answer end position\r\n> '''\r\n> for answer, context in zip(answers, contexts):\r\n> # Your code here\r\n> if answer['answer_start'] is None:\r\n> answer['answer_end'] = None\r\n> else:\r\n> answer_text = answer['text']\r\n> answer_start = answer['answer_start']\r\n> answer_end = len(answer_text) + answer_start\r\n> \r\n> #Sometimes answers are off by a character or two\r\n> if context[answer_start:answer_end] == answer['text']:\r\n> answer['answer_end'] = answer_end\r\n> # If the answer text is off by 1 character\r\n> elif context[answer_start-1:answer_end-1] == answer_text:\r\n> answer['answer_start'] = answer_start - 1\r\n> answer['answer_end'] = answer_end - 1 \r\n> # If the answer text is off by 2 characters\r\n> elif context[answer_start-2:answer_end-2] == answer_text:\r\n> answer['answer_start'] = answer_start - 2\r\n> answer['answer_end'] = answer_end - 2 \r\n> \r\n> add_end_idx(train_answers, train_contexts)\r\n> \r\n> ## Tokenize training data\r\n> tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')\r\n> train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True,\r\n> return_offsets_mapping=True,\r\n> return_overflowing_tokens=True)\r\n> \r\n> ## Find out start_position and end_position in encoded dataset\r\n> def add_token_positions(encodings, answers):\r\n> start_positions = []\r\n> end_positions = []\r\n> \r\n> for idx in range(len(answers)):\r\n> start_positions.append(encodings.char_to_token(idx, answers[idx]['answer_start']))\r\n> if answers[idx]['answer_end'] is None:\r\n> end_positions.append(encodings.char_to_token(idx, answers[idx]['answer_end']))\r\n> else:\r\n> end_positions.append(encodings.char_to_token(idx, answers[idx]['answer_end'] - 1))\r\n> \r\n> #if None, the answer passage has been truncated due to words > 512 so setting last position as 511\r\n> if start_positions[-1] is None:\r\n> start_positions[-1] = tokenizer.model_max_length-1\r\n> if end_positions[-1] is None:\r\n> end_positions[-1] = tokenizer.model_max_length-1\r\n> \r\n> encodings.update({'start_positions': start_positions, 'end_positions': end_positions})\r\n> \r\n> add_token_positions(train_encodings, train_answers)\r\n> \r\n> ## Validate answers based on start_position and end_position with actual answer for some random index\r\n> idx=8\r\n> print(f'Actual context: {train_contexts[idx]}')\r\n> print(f'Actual question: {train_questions[idx]}')\r\n> print(f\"Actual answer: {train_answers[idx]['text']}\")\r\n> \r\n> start_position=train_encodings.char_to_token(idx,train_answers[idx]['answer_start'])\r\n> end_position =train_encodings.char_to_token(idx,train_answers[idx]['answer_end'])\r\n> \r\n> ## ******This shows how start_position and end_position derived by using char_to_token() function is not correct******\r\n> print(f\"Answer after tokenization: {tokenizer.convert_ids_to_tokens(train_encodings['input_ids'][idx][start_position:end_position])}\")\r\n> ```\r\n\r\nSince Pytorch removed SAVE_STATE_WARNING now it will pop up an error if install transformers==4.0.1. I use transformers>=4.5 and it works"
] | 1,609 | 1,619 | 1,609 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1
- Platform: Google Colab
- Python version: 3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: NA
### Who can help: **tokenizers: @mfuntowicz**
## Information
Model I am using DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') to tokenize Squad 2.0 train and validate dataset.
The problem arises when using below code snippet to add_token_positions (start and end position) as below from https://huggingface.co/transformers/custom_datasets.html:
```python
def add_token_positions(encodings, answers):
    start_positions = []
    end_positions = []
    for i in range(len(answers)):
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))
        # if None, the answer passage has been truncated
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
        if end_positions[-1] is None:
            end_positions[-1] = tokenizer.model_max_length
    encodings.update({'start_positions': start_positions, 'end_positions': end_positions})

add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)
```
The tasks I am working on is:
*Training model on SQUaD 2.0 using code given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0
## To reproduce
Steps to reproduce the behavior:
1. Follow the steps given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0 and then verify start and end position outcome using below code snippet in Expected behavior
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior:
- Start and End position are being defined using above code snippet which will be provided as training/validation data to model but end position is not derived as correct value due to some issue with char_to_token() function which is used to find out end position.
- Please find below snippet for verification that answer using start and end position after tokenization is not matching with actual answer.
- So the training data which is being fed to model after tokenization is incorrect
```python
idx = 8
print(f'Actual context: {train_contexts[idx]}')
print(f'Actual question: {train_questions[idx]}')
print(f"Actual answer: {train_answers[idx]['text']}")

start_position = train_encodings.char_to_token(idx, train_answers[idx]['answer_start'])
end_position = train_encodings.char_to_token(idx, train_answers[idx]['answer_end'])

print(f"Answer after tokenization: {tokenizer.convert_ids_to_tokens(train_encodings['input_ids'][idx][start_position:end_position])}")
```
OUTPUT:
**Actual context:** Beyoncé Giselle Knowles-Carter (/biːˈjɒnseɪ/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record producer and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead singer of R&B girl-group Destiny's Child. Managed by her father, Mathew Knowles, the group became one of the world's best-selling girl groups of all time. Their hiatus saw the release of Beyoncé's debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles "Crazy in Love" and "Baby Boy".
**Actual question:** When did Beyoncé rise to fame?
**Actual answer:** late 1990s
**Answer after tokenization:** ['late', '1990s', 'as', 'lead', 'singer', 'of', 'r', '&', 'b', 'girl', '-', 'group', 'destiny', "'", 's', 'child', '.', 'managed', 'by', 'her', 'father', ',', 'mathew', 'knowles', ',', 'the', 'group', 'became', 'one', 'of', 'the', 'world', "'", 's', 'best', '-', 'selling', 'girl', 'groups', 'of', 'all', 'time', '.', 'their', 'hiatus', 'saw', 'the', 'release', 'of', 'beyonce', "'", 's', 'debut', 'album', ',', 'dangerously', 'in', 'love', '(', '2003', ')', ',', 'which', 'established', 'her', 'as', 'a', 'solo', 'artist', 'worldwide', ',', 'earned', 'five', 'grammy', 'awards', 'and', 'featured', 'the', 'billboard', 'hot', '100', 'number', '-', 'one', 'singles', '"', 'crazy', 'in', 'love', '"', 'and', '"', 'baby', 'boy', '"', '.', '[SEP]', 'when', 'did', 'beyonce', 'rise', 'to', 'fame', '?', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', 
'[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]'] | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9326/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9326/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9325 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9325/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9325/comments | https://api.github.com/repos/huggingface/transformers/issues/9325/events | https://github.com/huggingface/transformers/pull/9325 | 775,219,521 | MDExOlB1bGxSZXF1ZXN0NTQ1OTQzNzUz | 9,325 | Add FAVOR+ / Performer attention | {
"login": "norabelrose",
"id": 39116809,
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/norabelrose",
"html_url": "https://github.com/norabelrose",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions",
"organizations_url": "https://api.github.com/users/norabelrose/orgs",
"repos_url": "https://api.github.com/users/norabelrose/repos",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"received_events_url": "https://api.github.com/users/norabelrose/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@norabelrose Great job! Thanks for letting me know about this :)",
"@norabelrose - you've done an amazing job here! Having Performer in PyTorch is a huge contribution.\r\n\r\nI completely understand that you've already invested a lot of time in making this PR and we're happy to complete your PR!",
"also pinging @TevenLeScao here. \r\n\r\nIn terms of next steps, I think we should do the following (We're happy to take over those tasks @norabelrose :-)) : \r\n\r\n- Check that the added PyTorch and Tensorflow Performer self-attention yields identical results as the flax version: Compare Bert model to this Performer BertFlax model: https://github.com/huggingface/transformers/pull/8358\r\n- Fine-tune some pre-trained weighs to be compatible with Performer attention (ideally Bert or DistilBert)",
"Now, the painful discussion on how to integrate Performer.\r\n\r\n**Context**:\r\n\r\nPerformer attention is special in the sense that it is fully compatible with a pre-existing model architecture and does not require any weights to be different from normal attention. This means that a `bert-base-cased` model does not require any changes in its architecture to use Performer's attention. The only change will be how the weights are used to compute the self-attention layer output. \r\nOne can easily see this on the original code base: https://github.com/google-research/google-research/tree/master/performer/fast_attention/jax#jax-variant-of-favor where only the attention function has to be changed to `make_fast_softmax_attention` with no changes required to the parameters dict.\r\n\r\nThis is a huge argument for simply making Performer's self-attention available to all models by changing their respective `modeling_....py` file. \r\n\r\n**My opinion on the integration into Transformers**\r\n\r\nNevertheless, I'm in favor of implementing Performer **only** in a stand-alone file (at least at first), a.k.a. `PerformerModel` or maye in this case `PerformerBertModel`, which is **different** from the current version of the PR. I've the following arguments:\r\n\r\n- It's the standard in Transformers to add a new model for a new attention function. We've done the same for Longformer even though the Longformer attention could have been added for each model. It's easier for users to navigate between models, *e.g.* Performer will have its own model page vs. some docstring in utils.\r\n- It's actually not that easy to convert an existing BERT-like model to a Performer-BERT model. E.g. le'ts say we integrate Performer attention into `modeling_bert.py`. If one wants to convert the model to performer attention, the user would have to manually copy the positional embeddings (which are limited to 512 in Bert) to be as long as 64K+. We could write a convert function for this, but this convert function would probably be different for each model.\r\n- Performer cannot support all the functionalities of Bert. This means if we integrate Performer into Bert, a ```BertModel.from_pretrained(...., is_performer=True)``` model will not have all the functionalities that a Bert model will have, such as `output_attentions=True`, `is_decoder=True`, `is_encoder_decoder=True` -> Performer never creates the complete attention_mask so the `ouput_attentions=True` functionality gets lost, Performer does not support Encoder-Decoder out-of-the-box without requiring more if-else clauses. This will necessarily lead to many issues and some `if self.is_performer` code in BERT which I don't want to do.\r\n- Performer is still a very novel feature that is still somewhat experimental IMO. If Performer really takes off, we can always integrate the Performer attention more deeply into the library as proposed in this PR. The `modeling_bert.py` code is now used by 100K+ people, so I want to be very very careful with changes to this code especially. It's just a safer option to have a standalone Performer model in the beginning IMO. \r\n- I don't really think that users are interested to be able to use Performer Attention for all models. I think the models of interest will be `Bert` (`DistilBERT`), `GPT2`, `T5`, and `Bart`. Some models will never be used with Performer, such as Reformer, XLNet, Transfo-XL, Longformer, ConvBert, Routing Transformer, LED. 
\r\n\r\n\r\nI'd be thrilled you hear your opinions here @norabelrose, @sgugger, @LysandreJik, and @thomwolf",
"Thanks for this amazing contribution! \r\n\r\nI think long document classification and summarisation tasks are an important use case for this so having performer attention for some representative models in those scenarios would be fantastic. Personally I am looking forward to using performer attention with Roberta sequence and token classification models, but I understand not every model can get performer support right away so it would be great to also have a few examples on how we could add performer attention to other models ourselves, if possible.\r\n\r\nReally excited about this, thanks so much!",
"@patrickvonplaten Thank you for the thoughtful feedback. I understand your concerns about building Performer attention right into existing models like BERT. On the other hand, as @onclue mentioned, having only one model that supports Performer attention would really restrict the usefulness of the feature.\r\n\r\nIt seems like there should be some \"compromise\" option here. What if we just added simple Performer-supporting subclasses to a few different models, something like this:\r\n```\r\n@supports_performer_attention\r\nclass PerformerBertConfig(BertConfig):\r\n\tpass\r\n```\r\n\r\nand have `PerformerBertModel` be a subclass of `BertModel` that uses the following module for its attention mechanism:\r\n\r\n```\r\nclass PerformerBertAttention(BertAttention):\r\n\t@init_performer_attention_bertlike(BertSelfAttention)\r\n\tdef __init__(self, config):\r\n\t\tsuper().__init__(self, config)\r\n```\r\n\r\nAnd then the same process could be done for RoBERTa, DistilBERT, GPT-2, etc. I recognize that trying to add Performer support to \"all\" models is sort of silly and wouldn't work, but there are quite a few models that would benefit from it. It would also be nice if `PerformerAttention` and `PerformerAttentionConfig` remained public APIs, as they are in this PR, so that users could just take the attention mechanism and drop it into whatever custom model they want.",
"Thanks for your answer @norabelrose! I understand your point. The goal should definitely to support all the \"highly\" used models: DistilBERT, BERT, RoBERTa, T5, Bart\r\n\r\nI think we need to dive a bit deeper into the PR and play around with Performer to see how to best integrate the model, but in general the philosophy of the library has always been:\r\n\r\n- Model files should be kept as independent from each other as possible\r\n- Readability is more important than the drawback of duplicated code -> so we don't mind it too much if we duplicate Performer code across 5 or so model files\r\n- We try to minimize \"magic\" internal functionalities that are hard to understand when first seeing the code to a minimum meaning that we're not a huge fan of function decorators for important functionalities in general. \r\n\r\nBut we'll have to dive deeper into the PR to get a better understanding here - sorry for being so slow here!\r\n\r\nAlso, is there already a model that has successfully been fine-tuned to \"long\" inputs?",
"@norabelrose Where do the keys, queries & values come from when calling the Performer attention? \r\n\r\ndef call(self, query, key, value, mask=None, head_mask=None, output_attentions=False):\r\n\r\nThe call method of the Bert attention its replacing (i guess) takes the hidden states instead and then calculates q,k,v; \r\n\r\nHow does it work from a coding perspective that it can have different call inputs? \r\n& Should i just feed in the hidden states 3x for q, k, v when using performer attention? \r\n\r\n\r\n",
"To get the TFPerformerAttention working I had to had to apply three fixes:\r\n- Swap all shape calls for shape_list\r\n- Add mask = tf.reshape(mask, shape=(shape_list(k_prime)[0], shape_list(k_prime)[2])) in compute_attention_with_projected_queries_and_keys due to problems with the extended attn mask\r\n- Remove the reshapes in _finalize_attention_output, as we need the shape to stay in [..., num_heads, dim_per_head]-like shape\r\n\r\nperhaps it helps sb else // @norabelrose can correct me if im doing sth wrong",
"Hello,\r\n\r\nAmazing work @norabelrose! I have been trying your performer implementation. I have copied your attention implementation\r\n```PerformerAttention``` and have replaced that attention with the normal self-attention in Mobilebert. I have tracked some metrics with respect to other implementations. I have seen that the memory consumption on 512 tokens long it consume the same memory that the normal self attention. And it is also the same fast.\r\n\r\nI have logged the metrics with Wandb:\r\nhttps://wandb.ai/gaceladri/new_berts/reports/Memory-and-speed-comparison--Vmlldzo0NDA4MTI \r\n\r\nDoes that makes sense? I have seen in Long Range Arena https://arxiv.org/abs/2011.04006 that it is 1.2x faster with 1k tokens but I have not tried with that long. The point where I am confused is with the memory consumption. At shorter values, the attention mechanism, being linear with respect to sequence length, not should be consuming less memory?",
"@norabelrose I tried the `TFPerformerAttention` with some minor adaptions and it works fine during training. I must say it is a very nice implementation 👍 \r\nHowever, when I train my model, I save the weights at each checkpoint, and I quantize it into `model.pb` as well as into a `TF-Lite` model. When loading all the models again (from saved weights, quantized and tensorflow lite), the output of the model with loaded weights differ from the rest. Any idea why this is the case? ",
"@gcuder Would you mind sharing your code? I have been getting speed & memory improvements but the TFPerformer doesn't really converge... \r\n\r\n",
"What are the plans for this MR @patrickvonplaten ?",
"Based on @norabelrose great work, I set up a fork with the performer as a separate model at [https://github.com/Muennighoff/transformers](https://github.com/Muennighoff/transformers). I removed the decorators but kept the separate performer attention config. For now I only added Distilbert, the question being whether we should add new performer_xyz folders for each model or fit them in one performer folder.\r\nIt can just be used as \r\n`from transformers import DistilBertPerformerModel, DistilBertPerformerConfig`\r\n`configuration = DistilBertPerformerConfig()`\r\n`model = DistilBertPerformerModel(configuration)`\r\n\r\nHere's an example notebook comparing the distilbert perf/trans performance on seq. classification:\r\n[https://colab.research.google.com/drive/1o8ioYUIvvIol7PXrDguftQtHyCpyDSJL#scrollTo=JGxH15LIN66M](https://colab.research.google.com/drive/1o8ioYUIvvIol7PXrDguftQtHyCpyDSJL#scrollTo=JGxH15LIN66M)\r\n\r\nperhaps it can help us bring this forward? @marrrcin @patrickvonplaten \r\n\r\n",
"Hey guys, sorry I don't really have the bandwidth to take a closer look here at the moment, but it's definitely on my ToDo List! One thing that would be extremely useful would be to have a script that shows how a pretrained model such as `distilbert` can be extended to a its performer version and subsequently be fine-tuned for long-range sequence modeling. @TevenLeScao ran some initial experiments and didn't find the fine-tuning to be that easy...",
"Can it be made compatible with T5? As far as I know Performer and relative attention together is an open research question.",
"How about this? If I understand correctly Performer calculates `Q' * (K' * V)` instead of `softmax(Q * K) * V` (Q: queries, K: keys, V: values, *: matmul). T5 calculates `softmax(Q * K + B) * V` (B: relative positional biases). A new kind of model that initializes most of its weights from T5 could calculate `(softmax(Q * K) + B') * V = (Q' * K' + B') * V = Q' * (K' * V) + B' * V`. This way at least the first term can be calculated with FAVOR+ and the second term is much smaller/faster to calculate even if its complexity is quadratic. `B'` could be initialized in a way that on average the activations in the training set remain unchanged. We loose backward compatibility so more finetuning is necessary.",
"> How about this? If I understand correctly Performer calculates `Q' * (K' * V)` instead of `softmax(Q * K) * V` (Q: queries, K: keys, V: values, *: matmul). T5 calculates `softmax(Q * K + B) * V` (B: relative positional biases). I could calculate `(Q' * K' + B') * V` but then I would not gain much from using FAVOR+. But if I calculate `Q' * (K' * V) + B' * V` then at least the first term can be calculated with FAVOR+ and the second term is much smaller/faster to calculate even if it's quadratic complexity. `B'` can be initialized with `B` and finetuned.\r\n\r\nthat's an interesting idea! I will try to add T5 to the repo I set up so we can experiment with that; Currently for some reason only distilbert converges, while bert doesn't (https://colab.research.google.com/drive/1o8ioYUIvvIol7PXrDguftQtHyCpyDSJL#scrollTo=9pom5i196Bwg), so i need to figure that out first;; if anybody got bert to work let me know!",
"I refactored a lot of code & now works like a charm for me \r\nHere's a large notebook with TF & Torch comparisons for Perf/Trans on BERT, DBERT, T5 on short sequences (SST2 Dataset): https://colab.research.google.com/drive/1A9reiUZbA7DELuJ8keTo73sIQ4dJJVoT?usp=sharing\r\n\r\nTo add a new model all one needs to do is: \r\n- Copy over the config, tf/torch modeling file of the model you want favor attention for to its own folder [here](https://github.com/Muennighoff/transformers/tree/master/src/transformers/models/performer)\r\n- Add a modelXPerformer Config (should be about the same as the current modelXperformer configs)\r\n- Init favor attention directly in the self attention module (i.e. one level lower than in the implementation of this PR) - this is preferrable as it scales better to models with different attention modules / linear layers\r\n- feed q,k,v,mask after their linear layers through the favor attention to get back the final attention output\r\n- Remove all the softmax business😃 \r\n- Last thing to adapt is the (extended) attention mask -- We want the shape to be (bs, 1, seq_len, 1) instead of (bs, 1, 1, seq_len) & we don't want to fill it with the -infs, i.e. just leave it as 1's & 0's, as we multiply it not add it\r\n- If you want to import it with `from transformers import ModelXPerformer` like the current performer models, rename the models & add them to the `__init__.py` in the performer folder & transformer parent folder\r\n\r\nReg. T5: \r\n- Based on @marton-avrios proposal, I added T5 - it got a bit more complex due to the attn mask so I compute:\r\n\t- `Q' @ ((K' * M) @ V) + (B * M) @ V` & it converges🎉\r\n- However, B is a matrix of (bs, n_heads, L, L) where L is the seq len so it scales quadratically with seq len, the exact problem performers try to solve ;_; \r\n- Removing the Rel Pos Encoding entirely surprisingly has about the same performance and is much faster (i got a 30% speedup for 1000 seq_len) - Still need to test it for EncDec model; Another option is just using bert's abs. pos. embeddings.; The pretrained model shouldnt be affected much \r\n\r\nDecoders:\r\n- Only worked with T5Encoder so far; The code for doing causal perf. attention should be there, so adding EncDec / GPT-2 like models should be pretty simple; If somebody wants to try it let me know!; We just need to be careful with the attn mask when we implement it\r\n\r\nA couple pointers if you dont get the desired performance:\r\n- The approximation error propagates through the layers, i.e. the more layers the worse it may get (A 6-layer bert performer gives me about as good a performance as a 12-layer one)\r\n- Try increasing the random features & the feature drawing interval - The more random features the better the softmax approximation, though it also gets more expensive\r\n- Make sure the masking is correct!",
"I'm sorry I haven't responded to mentions on this PR recently— I've been quite busy with an unrelated project.\r\n\r\nThank you @Muennighoff for all your hard work extending/refining the PR! I just merged your changes.",
"wow, great work @Muennighoff !\r\n\r\nRegarding T5:\r\n - despite still being quadratic complexity have you measured any speed/memory improvements compared to vanilla T5? Or significantly worse (better?) performance? In vanilla T5 there are 2 computations of quadratic complexity: `QK` and `B` but the calculation of `QK` plays a much bigger role in the overall speed of T5. Also it is calculated (and stored) in every layer while `B` is only calculated (and stored) in the first layer.\r\n - when you mention 30% speedup and same performance by removing relative positional attention is it a PerformerT5 compared to a PerformerT5 without it or a vanilla T5 compared to a vanilla T5 without it? Because if it is a PerformerT5 comparison then I think it means that it cannot learn meaningful weights for `B` anyway.",
"> wow, great work @Muennighoff !\r\n> \r\n> Regarding T5:\r\n> \r\n> * despite still being quadratic complexity have you measured any speed/memory improvements compared to vanilla T5? Or significantly worse (better?) performance? In vanilla T5 there are 2 computations of quadratic complexity: `QK` and `B` but the calculation of `QK` plays a much bigger role in the overall speed of T5. Also it is calculated (and stored) in every layer while `B` is only calculated (and stored) in the first layer.\r\n> * when you mention 30% speedup and same performance by removing relative positional attention is it a PerformerT5 compared to a PerformerT5 without it or a vanilla T5 compared to a vanilla T5 without it? Because if it is a PerformerT5 comparison then I think it means that it cannot learn meaningful weights for `B` anyway.\r\n\r\nCheck out the T5 Tensorflow experiments here: https://colab.research.google.com/drive/1A9reiUZbA7DELuJ8keTo73sIQ4dJJVoT?usp=sharing\r\nFor each configuration (performer/transformer /// raw/pretrained) i ran it w/ & w/o pos bias, but only on short seq task of sst-2. \r\n\r\nFor the 1000 seq len task i mentioned, i ran only performer encoders (w/ & w/o pos bias) on byte-text level classification from the LRA paper & they had the same performance within +- 1% accuracy.\r\n\r\ni'm not yet sure what to make of it; I think we could confirm that they are of no use after training a full t5 enc-dec model in performer mode & benchmarking that",
"I realized a mistake in my formulation which would explain why PerformerT5 could not make use of `B'`.\r\n\r\nVanilla T5 calculates this: `inverse(D) * exp(Q * t(K) + B) * V`\r\n- ...which is equivalent to `softmax(Q * t(K) + B) * V`\r\n- ...where `D = diag(exp(Q * t(K) + B) * 1L)`, `t()` is the transpose function and `1L` is the all 1 vector of length L.\r\n\r\nI propose to calculate this: `inverse(D) * (exp(Q * t(K)) + B') * V`\r\n- ...which is equivalent to `inverse(D) * exp(Q * t(K)) * V + inverse(D) * B' * V`\r\n- ...where `D = diag(exp(Q * t(K)) * 1L + B' * 1L)`\r\n- ...and when finetuning `B'` should not be initialized from `B` but randomly instead.\r\n\r\nI propose to approximate it with: `inverse(D') * Q' * t(K') * V + inverse(D') * B' * V`\r\n- ...where `D' = diag(Q' * t(K') * 1L + B' * 1L)`.\r\n\r\nMy previous (incorrect) approximation was: `inverse(D') * Q' * t(K') * V + B' * V`\r\n- ...which approximates `inverse(D) * exp(Q * t(K)) * V + B' * V`\r\n- ...and NOT `inverse(D) * exp(Q * t(K)) * V + inverse(D) * B' * V`.",
"> * inverse\r\n\r\n\r\n\r\n> I realized a mistake in my formulation which would explain why PerformerT5 could not make use of `B'`.\r\n> \r\n> Vanilla T5 calculates this: `inverse(D) * exp(Q * t(K) + B) * V`\r\n> \r\n> * ...which is equivalent to `softmax(Q * t(K) + B) * V`\r\n> * ...where `D = diag(exp(Q * t(K) + B) * 1L)`, `t()` is the transpose function and `1L` is the all 1 vector of length L.\r\n> \r\n> I propose to calculate this: `inverse(D) * (exp(Q * t(K)) + B') * V`\r\n> \r\n> * ...which is equivalent to `inverse(D) * exp(Q * t(K)) * V + inverse(D) * B' * V`\r\n> * ...where `D = diag(exp(Q * t(K)) * 1L + B' * 1L)`\r\n> * ...and when finetuning `B'` should not be initialized from `B` but randomly instead.\r\n> \r\n> I propose to approximate it with: `inverse(D') * Q' * t(K') * V + inverse(D') * B' * V`\r\n> \r\n> * ...where `D' = diag(Q' * t(K') * 1L + B' * 1L)`.\r\n> \r\n> My previous (incorrect) approximation was: `inverse(D') * Q' * t(K') * V + B' * V`\r\n> \r\n> * ...which approximates `inverse(D) * exp(Q * t(K)) * V + B' * V`\r\n> * ...and NOT `inverse(D) * exp(Q * t(K)) * V + inverse(D) * B' * V`.\r\n\r\nYeah you're right the previous approximation wasn't correct; I also forgot to include it in D when doing the code experiments;\r\nWe could try the approximation you propose. \r\n\r\nAnother angle could be:\r\nSince \r\n`exp(Q @ t(K) + B) = exp(Q @ t(K)) * exp(B)`\r\nand \r\n`exp(Q @ t(K)) ~ ϕ(Q) @ t(ϕ(K))` \r\ni think we can do\r\n`exp(Q @ t(K) + B) ~ ϕ(Q) @ t(ϕ(K)) * exp(B) `\r\nbut rearranging \r\n`(ϕ(Q) @ t(ϕ(K)) * exp(B)) @ V `\r\nto avoid calculating Q @ K first is a pain ",
"Is this close? My teammates and I want to use performers in T5",
"I was just looking through the code and this is stuff of legends! Great work.\r\n\r\nIn the T5 implementation, I noticed that performer attention forward method is called with position bias, yet it is not currently a valid parameter. Is that residual from the conversation about the above position bias conversations?\r\n\r\nEDIT: Ignore the above, I was looking at the wrong implementation of `PerformerAttention`",
"> I was just looking through the code and this is stuff of legends! Great work.\r\n> \r\n> In the T5 implementation, I noticed that performer attention forward method is called with position bias, yet it is not currently a valid parameter. Is that residual from the conversation about the above position bias conversations?\r\n> \r\n> EDIT: Ignore the above, I was looking at the wrong implementation of `PerformerAttention`\r\n\r\nI removed the position bias temporarily, as not using it at all worked best. I havn't tried @marton-avrios most recent idea though, so perhaps somebody might want to try it and report back. \r\n\r\nIf you only need an Encoder T5, you should be able to work with what's there -- For Encoder-Decoder, The causal decoder is currently still prohibitively expensive due to the for loop & cumsum operation (@mymusise and me are working on it [here](https://github.com/mymusise/gpt2-quickly/issues/5)). Let us know if you get the decoder to perform! ",
"I'm thinking about the position bias, and it doesn't seem like there's a good way to include it. What's been mentioned above seems correct, that the mathematical starting point is \r\n\r\n(Q'K'^T * B')V, where B' = e^B (elementwise)\r\n\r\nBut, this can't be computed without computing Q'K'^T first, defeating the purpose.\r\n\r\nThe alternative is to add some position encoding into each of Q' and K' (a la 'Attention Is All You Need'). I think this is the only / best way to do position bias in this context. That said, it would be getting kind of wonky / outside the spirit of the performers paper, so I'm not sure position bias should even be allowed in this PR.\r\n\r\nDo you all agree with this?",
"Hi, I have been trying to run finetuning with `T5PerformerForConditionalGeneration` using this pull request branch, and I have got few minor issues or questions I wanted to ask about.\r\n\r\n1. Merge conflict comments were still left under `/src/transformers/__init__.py`, which is not a serious issue.\r\n2. After getting the attention output from `PerformerAttention`, I had to add `unshape` call it to concat the head attentions and multiply to `W0` in `forward()` of `T5Attention`. I found original call to unshape was commented out since it included matmul of `V`.\r\n3. Both in encoder and decoder, I was getting matrix multiplication exception by wrong dimension on the line when multiplying(in `PerformerAttention`) `mask` to `k_prime`. Was this the reason why @norabelrose mentioned T5 Decoders is not fully working yet? I am trying to fix this attention mask issue for the decoder, but for encoder case, is transposing the attention mask the right way to fix?\r\n> Decoders:\r\n> \r\n> Only worked with T5Encoder so far; The code for doing causal perf. attention should be there, so adding EncDec / GPT-2 like models should be pretty simple; If somebody wants to try it let me know!; We just need to be careful with the attn mask when we implement it",
"> Hi, I have been trying to run finetuning with `T5PerformerForConditionalGeneration` using this pull request branch, and I have got few minor issues or questions I wanted to ask about.\r\n> \r\n> 1. Merge conflict comments were still left under `/src/transformers/__init__.py`, which is not a serious issue.\r\n> 2. After getting the attention output from `PerformerAttention`, I had to add `unshape` call it to concat the head attentions and multiply to `W0` in `forward()` of `T5Attention`. I found original call to unshape was commented out since it included matmul of `V`.\r\n> 3. Both in encoder and decoder, I was getting matrix multiplication exception by wrong dimension on the line when multiplying(in `PerformerAttention`) `mask` to `k_prime`. Was this the reason why @norabelrose mentioned T5 Decoders is not fully working yet? I am trying to fix this attention mask issue for the decoder, but for encoder case, is transposing the attention mask the right way to fix?\r\n> \r\n> > Decoders:\r\n> > Only worked with T5Encoder so far; The code for doing causal perf. attention should be there, so adding EncDec / GPT-2 like models should be pretty simple; If somebody wants to try it let me know!; We just need to be careful with the attn mask when we implement it\r\n\r\nI think 1,2 & 3 are all fixed here: https://github.com/Muennighoff/transformers ; \r\nThe masking for the decoder however may not yet work\r\n"
] | 1,609 | 1,621 | 1,617 | CONTRIBUTOR | null | ## What does this PR do?
Adds support for the Performer / FAVOR+ attention mechanism, as described in the paper "Rethinking Attention with Performers" by Choromanski et al., 2020. Fixes #7675.
## How is it implemented?
Since Performer attention can be an unbiased estimator of traditional softmax attention, and pretrained models can be finetuned to work with it, the general consensus in the discussion on #7675 was that it should not be implemented as a single separate Transformer model. Ideally, we want all or most models in the transformers library to be able to use Performer attention.
In view of this, I've implemented the feature by creating three new classes: `PerformerAttention`, `TFPerformerAttention`, and `PerformerAttentionConfig`. These are implemented in the files `modeling_performer_attention.py`, `modeling_tf_performer_attention.py`, and `configuration_performer_attention.py` respectively in `src/transformers`.
Models are marked as supporting Performer attention by adding the `@supports_performer_attention` class decorator to the corresponding config class. This decorator adds the `attention_type: str` and `performer_attention_config: Optional[Union[dict, PerformerAttentionConfig]]` attributes to the config class, and also adds some boilerplate code to the class's `to_dict()` method to make sure JSON serialization works properly. It also registers the class so that the user can get a full list of Performer attention-supporting models with the function `performer_supporting_models_and_configs()`.
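To make the mechanism concrete, here is a minimal sketch of what such a config-class decorator could look like. The attribute and function names follow the description above, but the body is an illustrative assumption rather than the PR's actual code, and it omits the `to_dict()` patching mentioned above:

```
# Illustrative sketch only -- not the PR's actual implementation.
_PERFORMER_SUPPORTING_CONFIGS = []


def supports_performer_attention(config_cls):
    """Register the config class and inject the Performer-related attributes."""
    _PERFORMER_SUPPORTING_CONFIGS.append(config_cls)
    original_init = config_cls.__init__

    def patched_init(self, *args, attention_type="softmax", performer_attention_config=None, **kwargs):
        original_init(self, *args, **kwargs)
        self.attention_type = attention_type
        self.performer_attention_config = performer_attention_config

    config_cls.__init__ = patched_init
    return config_cls


def performer_supporting_models_and_configs():
    return list(_PERFORMER_SUPPORTING_CONFIGS)
```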
This isn't quite enough to get Performer attention to work for a new model, though. Adding Performer support to a model is inherently a somewhat tedious process, but I've tried to make it less tedious by implementing a `@init_performer_attention()` function decorator which can be added to the `__init__` method on the immediate parent of an attention module within a model— this will initialize either the model's own softmax attention module, or a `PerformerAttention` module, depending on how `attention_type` is set. You can see how this is implemented in `performer_attention_utils.py`.
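Similarly, a hedged sketch of the init-time decorator idea; the `attention_attr` parameter is hypothetical, and both the import path and the `PerformerAttention(...)` constructor call are assumptions based on the description above:

```
# Illustrative sketch only; the real decorator may wire things up differently.
import functools

from transformers.modeling_performer_attention import PerformerAttention  # module added by this PR (path assumed)


def init_performer_attention(attention_attr="attention"):
    def decorator(init_fn):
        @functools.wraps(init_fn)
        def wrapper(self, config, *args, **kwargs):
            init_fn(self, config, *args, **kwargs)
            # Replace the freshly built softmax attention module with PerformerAttention
            # when the config asks for it (constructor signature assumed).
            if getattr(config, "attention_type", "softmax") == "performer":
                setattr(self, attention_attr, PerformerAttention(config.performer_attention_config))
        return wrapper
    return decorator
```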
This is all that you need to do for some models, although others will need a bit of extra work due to idiosyncrasies in their implementation. I've already added Performer support to the following models: DistilBERT, BERT, RoBERTa, ELECTRA, LayoutLM, and TAPAS (in both PyTorch and TensorFlow). My hope is that other contributors will add support to other models relatively quickly.
Unit tests can be found in `test_performer_attention.py`. They do an exhaustive grid search of the enum and boolean config options and make sure that none of the 4.6k+ combinations causes a crash or a shape mismatch, and also make sure that the PyTorch and TensorFlow implementations have the same output, within numerical error, under all configurations.
## Rough edges
While I added extensive docstrings to `PerformerAttention` and `PerformerAttentionConfig`, which can be used to generate documentation, I haven't actually made the documentation files themselves. That will have to be left to another contributor, or to my future self— although honestly I've put quite a lot of time into this PR and would like to get on to other projects, so I would really appreciate it if someone else did it.
`PerformerAttention` supports using a custom CUDA kernel from the `fast_transformers` library to implement causally masked attention, although I have never actually been able to test this functionality because I don't have root access to the GPU server I use and therefore can't install NVCC. I'm hoping a reviewer could do that; it's a relatively straightforward feature, so if there are any bugs in it, they should be pretty easy to fix.
Also, the current code throws some odd linter errors which I haven't been able to figure out how to resolve and which don't seem to be consequential: something about the code in RoBERTa, LayoutLM, etc. that is marked as being copied from BERT not matching the BERT code exactly. If a reviewer could figure out how to silence that error, it would be greatly appreciated.
## Who can review?
@patrickvonplaten commented on #7675 and seemed excited about the PR, so I think he would be a good reviewer for this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9325/reactions",
"total_count": 37,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 18,
"confused": 0,
"heart": 19,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9325/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9325",
"html_url": "https://github.com/huggingface/transformers/pull/9325",
"diff_url": "https://github.com/huggingface/transformers/pull/9325.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9325.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9324/comments | https://api.github.com/repos/huggingface/transformers/issues/9324/events | https://github.com/huggingface/transformers/issues/9324 | 775,186,171 | MDU6SXNzdWU3NzUxODYxNzE= | 9,324 | Music Transformers | {
"login": "asigalov61",
"id": 56325539,
"node_id": "MDQ6VXNlcjU2MzI1NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/56325539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asigalov61",
"html_url": "https://github.com/asigalov61",
"followers_url": "https://api.github.com/users/asigalov61/followers",
"following_url": "https://api.github.com/users/asigalov61/following{/other_user}",
"gists_url": "https://api.github.com/users/asigalov61/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asigalov61/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asigalov61/subscriptions",
"organizations_url": "https://api.github.com/users/asigalov61/orgs",
"repos_url": "https://api.github.com/users/asigalov61/repos",
"events_url": "https://api.github.com/users/asigalov61/events{/privacy}",
"received_events_url": "https://api.github.com/users/asigalov61/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @asigalov61,\r\n\r\nI think applying `Transformers` to Music is a super cool idea! Regarding the best model to use for music composition, IMO it depends strongly on:\r\n\r\n- What is the input to the model? Do you input tokens or float vectors?\r\n- How long is the input? *e.g.* how many float vectors or tokens? GPT2 is limited to 1024 tokens / float vectors -> is this too short? \r\n- For generation (composition), I think only our `autoregressive models` make sense: https://huggingface.co/transformers/model_summary.html#autoregressive-models so mostly GPT2. For \"classification\" it would mostly be BERT. If you need very long inputs, it would be interesting to check-out ReformerLM: https://huggingface.co/google/reformer-enwik8\r\n\r\nI bet people would be very interested in Transformers + Music. We've created a new examples folder structure for such projects, so feel free to open a PR to add a dir \"music_transformers\" here: https://github.com/huggingface/transformers/tree/master/examples/research_projects",
"Hey Patrick,\r\n\r\nThank you for your help/guidance and for the welcome 🙂\r\n\r\nI think I have created the proper PR for the new dir as you have suggested. Please check it and let me know if it is ok. I am new to PRs so it's still kinda difficult to do it right sometimes 🙂\r\n\r\nWhat can I add there? Can I add my GPT2 implementation there? I should put it in a separate dirs there? Right?\r\n\r\nRegarding your questions for me:\r\n\r\n1) I am sorta working with what is available so atm I just use existing implementations. So I usually use implementation's way of feeding the model. I.e. for my GPT2 implementation, I use the minGPT char-based approach which is painfully slow and inefficient. minGPT does not have BPE yet so I can't really improve it and do it properly as it is very complex for me and difficult. This is why I was very interested in your work cuz you guys provide a standardized and easy way to do it.\r\n\r\nSo basically in my GPT2 implementation I simply feed it the text char tokens. I have attached the example of input/output in my original post. Check it out if you can, please.\r\n\r\n2) I figured that GPT2 is most capable (OpenAI did the same thing with MuseNet). So I was wondering if you guys have a nice GPT2 version that is tuned to the limit. This would really help.\r\nAlso I need better tokenizer but I do not know how to do it. So if you can help/give me specific pointers, I will really appreciate it.\r\n\r\n3) I most certainly heard about the Reformer. And it would be super cool to try it. But again, I have no idea how to make it compatible with the text input/text tokens I use. So if you can help, this also will be much appreciated.\r\n\r\nAgain, thank you for your advice.\r\n\r\nMost sincerely,\r\n\r\nAlex\r\n\r\n\r\n________________________________\r\nFrom: Patrick von Platen <[email protected]>\r\nSent: Monday, December 28, 2020 4:35 AM\r\nTo: huggingface/transformers <[email protected]>\r\nCc: Alex <[email protected]>; Mention <[email protected]>\r\nSubject: Re: [huggingface/transformers] Music Transformers (#9324)\r\n\r\n\r\nHey @asigalov61<https://github.com/asigalov61>,\r\n\r\nI think applying Transformers to Music is a super cool idea! Regarding the best model to use for music composition, IMO it depends strongly on:\r\n\r\n * What is the input to the model? Do you input tokens or float vectors?\r\n * How long is the input? e.g. how many float vectors or tokens? GPT2 is limited to 1024 tokens / float vectors -> is this too short?\r\n * For generation (composition), I think only our autoregressive models make sense: https://huggingface.co/transformers/model_summary.html#autoregressive-models so mostly GPT2. For \"classification\" it would mostly be BERT. If you need very long inputs, it would be interesting to check-out ReformerLM: https://huggingface.co/google/reformer-enwik8\r\n\r\nI bet people would be very interested in Transformers + Music. We've created a new examples folder structure for such projects, so feel free to open a PR to add a dir \"music_transformers\" here: https://github.com/huggingface/transformers/tree/master/examples/research_projects\r\n\r\n—\r\nYou are receiving this because you were mentioned.\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/issues/9324#issuecomment-751698309>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/ANNXLI3QSD2OFJT5JQ4FTGLSXB3PVANCNFSM4VLQPGAA>.\r\n",
"Hello! I’d be willing to contribute work in this space if anyone would like to collaborate. In my previous life I was a professional audio engineer, now I’m an enterprise AI systems architect. https://www.paulprae.com/",
"@praeducer Hey Paul!\r\n\r\nThank you for responding to this thread. I would love to collab and create something based on hugginface implementations so if you can help, I would really appreciate it.\r\n\r\nBasically, huggingface docs are very convoluted and unclear to me atm so if you can create a working collab with GPT2 hugginface implementation, I can take it from there and add music parts to it.\r\n\r\nI need something similar to my own GPT2 implementation but based on huggingface so that we can add it here and contribute to their repo/library.\r\n\r\nThis is what I have and this is what I need:\r\nhttps://github.com/asigalov61/Optimus-VIRTUOSO\r\n\r\nAnd my attempt to use huggingface implementation is posted above in the thread so check it out also.\r\n\r\nThanks a lot. Looking forward to working together with like-minded people.\r\n\r\nAlex.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,620 | 1,620 | NONE | null | # 🚀 Feature request
Hello guys! Thanks for your amazing work on the transformers! This is very needed and appreciated :)
I wanted to ask if it is possible to add a section/transformers dedicated specifically to Music. I searched GitHub and your model's repo but I could not find even a single model/solution that would be suitable for music.
NLP models are most capable when it comes to Music AI and I think it would be a great feature/section/branch to investigate/cover.
## Motivation
OpenAI MuseNet and Google Music Transformer. Enough said I think. If you never tried either, you have been really missing out. AFAIK MuseNet is built on a custom GPT2-like model/architecture. And Google used XLNet I think.
## Your contribution
I was able to create a decent model/code/implementation of Music AI based on GPT2 model/architecture. You are welcome to check it out here: https://github.com/asigalov61/Intelligent-VIRTUOSO
I used the minGPT implementation to do it, and it turned out quite capable and nice :)
However, I do want to ask the following:
1) What is the best Hugging Face model/architecture you can recommend for Music AI applications? Please be specific and please give me a simple example of how to try it. This will be very much appreciated. I want to use Hugging Face Transformers, so whatever works, please let me know. I am attaching a sample text file so that you can see my encoding, but I can adjust easily to any specs/needs of Hugging Face Transformers. I have heard that BERT would be best at something like this, but I may be mistaken...
2) Is there a nice Google Colab to try? I would prefer a simple working example to Python repos...
3) What are the optimal settings/hyperparameters you can recommend for GPT2 (right now I follow minGPT guidelines), and what can you recommend to try for the most suitable Hugging Face Transformer?
I really hope to hear constructive suggestions/advice because I want to learn and improve my skills and knowledge. Plus I love music almost as much as I love computers so I am quite passionate about both and would love to connect with others who are into Music and AI, if you guys exist...
Thank you very much in advance for your time and responses.
[TMIDI-TXT-Composition (13).txt](https://github.com/huggingface/transformers/files/5745869/TMIDI-TXT-Composition.13.txt)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9324/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9323/comments | https://api.github.com/repos/huggingface/transformers/issues/9323/events | https://github.com/huggingface/transformers/pull/9323 | 775,163,463 | MDExOlB1bGxSZXF1ZXN0NTQ1ODk4MTUx | 9,323 | [T5 model parallel] implement input auto-relocation + lots of refactoring/cleanup | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2627272588,
"node_id": "MDU6TGFiZWwyNjI3MjcyNTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel",
"name": "Model Parallel",
"color": "8B66A5",
"default": false,
"description": "Model Parallelilsm Implementations"
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"I don't have an in-depth knowledge of our model parallelism features, so it would be great if @LysandreJik can take a look here as well. \r\n\r\nI think in general, I'm in favor of this PR. However, I'm not sure if a function decorator is better than just having two lines of \r\n```\r\nif self.is_parallel: \r\n # call map to device function\r\n```\r\n\r\nin the respective forward function. We've decided against using function decorators in Pytorch at multiple points (gradient checkpointing e.g.), so I'm not convinced it's the better option to do it here. Function decorators do reduce code readability quite a lot IMO.",
"I'm not sure how your suggestion would work since it needs to be generic, and once inside `forward` the function args are no longer generic. Remember, I'm trying to build a generic functionality that can work out of the box in any `transformers` model and not specific to t5.\r\n\r\nThe other approach that doesn't need a decorator is to override `self.__call__` via `self.parallelize` to set to a variation of this wrapper.\r\n\r\n```\r\n def parallelize(self, device_map=None):\r\n $self.__call__ = model_parallel__call__\r\n [...]\r\n\r\n def deparallelize(self):\r\n $self.__call__ = nn.Module.__call__\r\n [...]\r\n```\r\nand:\r\n```\r\ndef model_parallel__call__(self, *input, **kwargs):\r\n \r\n # get device of any of the params of this layer\r\n try:\r\n device = next(self.parameters(recurse=True)).device\r\n except StopIteration:\r\n device = None\r\n\r\n if device is not None:\r\n\r\n input = list(input)\r\n for i, v in enumerate(input):\r\n if v is not None:\r\n input[i] = v.to(device)\r\n input = tuple(input)\r\n\r\n for k in kwargs.keys():\r\n if kwargs[k] is not None and torch.is_tensor(kwargs[k]):\r\n kwargs[k] = kwargs[k].to(device)\r\n\r\n return nn.Module.__call__(self, *input, **kwargs)\r\n```\r\n(or could save the original `self.__call__` to be more flexible and to allow for others to override this too)\r\n\r\nthis in fact is even better since it will have 0 impact on non-MP functionality as this wrapper will be called only under MP.",
"This is great progress, @stas00! From my perspective, to create a general way of doing model parallelism, we need four things:\r\n* a format for `device_map` that can be used on any model\r\n* `device_map` and `model_parallel` need to be attributes on all models, probably by assigning them to `PreTrainedModel`\r\n* `parallelize()` and `deparallelize()` should be on all models, again probably by assigning them to `PreTrainedModel`\r\n* changes to the forward methods need to be abstracted if at all possible (this is by far the most challenging)\r\n\r\nThis PR makes a lot of progress, the strongest of which is a potential abstraction/simplification of the changes to the forward method. Not sure if a decorator is the solution. @LysandreJik will have that insight when he's back. I like the suggestion by @patrickvonplaten that it's instead a two line implementation `if self.model_parallel` instead of a decorator. But the BIG thing is if most or all of the code in the forward method can be replaced with with something like:\r\n```\r\nif self.model_parallel:\r\n hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, head_mask, past_key_value = _call__mp(hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, head_mask, past_key_value)\r\n```\r\nIf we can get that right, it might turn model parallelism from a day or weekend project per model into something that takes a few minutes. Much more scalable and sustainable. \r\n\r\nSupporting non-sequential GPUs could be more trouble than its worth -- not entirely sure on this, it's just my instincts. With the billion + parameter models that we're dealing with -- and all indications are that it's only getting bigger going forward -- it's pretty fair to say that most workflows in enterprise and research will be: \r\n\r\n1. develop locally on a machine with one or maybe two GPUs on a small sized version of a model, and \r\n\r\n2. train a final model on a cloud instance or cluster with multiple identical GPUs. \r\n\r\nSequential hand-offs between GPUs will be the norm in cases like that, which I think are going to be most of them. \r\n\r\nThe other thing I worry about is a challenge with PyTorch 1.5 and 1.6 model parallelism behavior. The seemingly redundant clauses and `set_device` statements are there to prevent PyTorch's inferential logic from moving modules or inputs around after `.to()` assignments have been called. It's very annoying. I don't know if it's fixed in 1.7. You'll notice that output layers like the `lm_head` are always on the first device instead of the last device. A more logical workflow would have the embedding layers on the first device and the output layers on the last device. \r\n\r\nI got that to work just fine in forward passes, but I must've tried 10 different ways to get it to behave in backprop before conceding that for whatever reason PyTorch's quantum device superposition just wouldn't allow it. So the output layers are on the same device as the embedding layers. You'd think that matters for load balance between GPUs, and it does -- for gpt2-xl. 
But since we're practically limited in most situations to PyTorch's 8 GPU per machine preference (inherited from CUDA), by the time you're at 3 billion parameters the embedding and `lm_head` layers are so small in comparison to the attention blocks that it doesn't matter that they're both on the first device, and a custom `device_map` solves the problem for cases where that matters. The implementation implies that there is an extra hand-off or two of a large tensor between GPUs, but I don't think having a perfectly optimal setup will save even 10% on training time. Happy to be proven wrong on this though. What will save a TON of time and $$$ though is deepspeed integration.\r\n\r\nI got t5-11b with 1024 tokens to train quickly on the new p4 instance AWS released last month with its epic 320 GB of GPU memory so I was like \"ok fine whatever... that's pretty good\". ",
"That's awesome, @alexorona. Do continue to share your insights from the frontier!\r\n\r\nLet's wait for @LysandreJik to come back to plan ahead and meanwhile I will experiment with Bart.\r\n\r\n> The seemingly redundant clauses and set_device statements are there to prevent PyTorch's inferential logic from moving modules or inputs around after .to() assignments have been called. It's very annoying. I don't know if it's fixed in 1.7. \r\n\r\nOh, so glad you flagged that. Would it be enough to run the existing parallel tests with pt-1.5, pt-1.6 to detect these failures?\r\n\r\nI'm developing on pt-nightly since rtx-30* work only there (well 1.7.1 should be usable too, but mainly waiting for cuda-11.2 support in pytorch, which is again pt-nightly - won't be in 1.7.x). But it means I can't use it with older pt versions.\r\n\r\nBut since we have to support pt-1.4, I will then put `set_device` back as you had them originally. But this time let's add specific comments why there are there, otherwise someone like myself will think they are some left-overs from earlier experimentation and swipe them away.\r\n\r\nActually, I think we should have a design document where we explain why this or that is done. Rather than make a lot of noise in the model files. A developer-oriented doc. \r\n\r\nThe `set_device` was just one thing, right? Or have I naively nuked any other essentials?\r\n\r\nThanks again!",
"@alexorona, one more question. If pt-1.7+ removes the need for jumping through hoops, as you're suggesting older versions have all kinds of issues, perhaps it'd be a reasonable approach to make MP in `transformers` require pt-1.7?\r\n\r\nIf and when you get some time could you please test if what wasn't working in pt < 1.7 works in pt-1.7? And if not - perhaps we need to file some Issues with pytorch if there are bugs to be solved.\r\n\r\nThank you.",
"@stas00 Will try to do so, but in the middle of moving so I don't think I'll get to this until the end of January at the soonest. The team would have to make the call about only support model parallelism for PyTorch >= 1.7.0 if it won't work on earlier versions. I would be very tempted to support that idea, but don't have enough usage information to know what the impact would be.",
"I guess once everybody is back next week we can start having some discussion with the HF team.\r\n\r\nHave an easy move!",
"@stas00 Yeah, should be able to get some input when everyone is back. In the meantime, I'm still not sure on the final form of the `device_map`. There are two issues left to work out: \r\n\r\n1. Some models don't have decoder architectures\r\n2. No ability to map embeddings and output layers (always on first device), which _might_ be just fine. I think most output layers and embeddings are going to be comparatively small to attention blocks going forward, but we should confirm that. We are allowing people to create custom a `device_map` that should enable them to get around any potential situations where the first device is becoming overloaded.\r\n\r\nTo confirm, this looks good for decoder architectures:\r\n```\r\ndevice_map = {\r\n\tencoder: {\r\n\t\t\t0: [0, 1, 2, 3, 4, 5],\r\n\t\t\t1: [6, 7, 8, 9, 10, 11]\r\n\t\t\t},\r\n\tdecoder: {\r\n\t\t\t2: [0, 1, 2],\r\n\t\t\t3: [3, 4, 5]\r\n\t\t\t}\r\n}\r\n```\r\n\r\nMaybe we use the keys to map to the attribute? In gpt2, `self.h` contains the attention blocks, so:\r\n```\r\ndevice_map = {\r\n\th: {\r\n\t\t0: [0, 1, 2, 3, 4, 5],\r\n\t\t1: [6, 7, 8, 9, 10, 11]\r\n\t\t}\r\n}\r\n```\r\nIn trying to generalize `parallelize()`, we still need access to list of all modules. For example, in `GPT2LMHeadModel`, we would need to know: `self.lm_head`, `self.transformer.h`, `self.transformer.wte`, `self.transformer.wpe` and `self.transformer.ln_f`. ",
"I haven't looked into gpt2, yet. t5 and bart are very similar structure-wise. We probably need to map out all the different archs `transformers` has and then generalize.\r\n\r\nWhat is emerging so far is that the device map might have various keys, none required, and each model architecture will have:\r\n\r\n1. its required keys\r\n2. its own default map generator - so that the user doesn't have to provide one and overtime it can be improved to have smarts to create a balanced map based on the \"insider\" information.\r\n\r\nSo if some architectures need to explicitly manage the mapping of non-block/layers, rather than just assigning them by default on the \"`main_device`\", because they are significantly big, they could do that too. Otherwise, leave the `main_device` to all the \"smallish-fish\" and use the other devices for \"the big fish\" if that makes sense. The main advantage of this \"lazy\" approach is that there is less device-hopping and less code needed to match the hopping.",
"Yes, that's right. \r\n\r\nSo it turns out the `self._modules` attribute has all of the modules. To move `parallelize()` to `PreTrainedModel`, I think all we need is a per-model `module_map` object to map between the `device_map` and the model placements. With a little work, we might be able to reduce making a model parallel to:\r\n\r\n1. Adding a few lines of code in the forward method per your work\r\n2. Modifying the validation function to check for errors in a custom `device_map`\r\n3. Creating a `module_map` dictionary for that model and adding it to the `get_module_map()` function \r\n\r\nWe can embed special placement rules where non-attention block modules need to be on the same device as another module by creating a tuple in `module_map['dependent_modules']`:\r\n\r\n```\r\n# Device map for GPT2LMHead. T5 would have 'encoder', 'decoder' as keys instead of 'h' and validate_device_map would\r\n# check to see if the device_map has the right keys.\r\ndevice_map = {\r\n\t'h': {\r\n\t\t\t0: [0, 1, 3, 4],\r\n\t\t\t1: [5, 6, 7, 8],\r\n\t\t\t2: [9, 10, 11, 12]\r\n\t}\r\n}\r\n\r\n\r\nclass PreTrainedModel():\r\n...\r\n\t# Probably use get_model_map(), but just to make it simple:\r\n\tself.module_map = {\r\n\t\t'h': self._modules['transformer'].h,\r\n\t\t'embeddings': [\r\n\t\t\t\t\tself._modules['transformer'].wte,\r\n\t\t\t\t\tself._modules['transformer'].wpe\r\n\t\t\t\t\t],\r\n\t\t'dependent_modules': [\r\n\t\t\t\t\t(\r\n\t\t\t\t\t\tself._modules['transformer'].ln_f], \r\n\t\t\t\t\t\tmodel._modules['transformer'].h[-1],\r\n\t\t\t\t\t),\r\n\t\t\t\t\t(\r\n\t\t\t\t\t\tself._modules['lm_head'],\r\n\t\t\t\t\t\tself._modules['transformer'].wte\r\n\t\t\t\t\t)\r\n\t\t\t\t]\r\n\t}\r\n\r\ndef parallelize(self, device_map = None):\r\n\t\r\n\tself.device_map = device_map\r\n\r\n\t# validate_device_map extended to check for valid keys for model\r\n\t\r\n\t...\r\n\r\n\t# Set all embeddings to first device\r\n\tif 'embeddings' in self.module_map:\r\n\t\tfor layer in self.module_map['embeddings'].items():\r\n\t\t\tlayer.to(self.first_device)\r\n\r\n\r\n\t# Assign attention blocks to the appropriate device.\r\n\tfor module_group, group_map in self.device_map.items():\r\n\t for device, layers in group_map.items():\r\n\t for layer in layers:\r\n\t self.module_map[module_group][layer].block_parallelize(f\"cuda:{device}\")\r\n\r\n\r\n\t# Some modules should always be on the same device as another module. We can express \r\n\t# this as a tuple pair where tuple[0] needs to be on tuple[1]\r\n\tif 'dependent_modules' in self.module_map:\r\n\t\t for i in self.module_map['dependent_modules']:\r\n\t\t \ti[0].to(i[1].device)\r\n```\r\n",
"All, awesome suggestions that should be looked at next once the current work has been merged.\r\n\r\nI'm going to wait implementing anything new, since there are already too many partial PRs that need to be carefully merged and rebased and once that is done we can do another round of generalization integrating your suggestions.",
"So if there is no objection, I will merge this one, and then start integrating with https://github.com/huggingface/transformers/pull/9384, which is ahead functionality-wise - so I want to sync the two, switching t5 to the improved version of MP backend. I will implement the suggestions in that new PR.",
"As we have discovered the original PR didn't make t5 work with trainer. I have just fixed that in the last commit here, bringing some goodies over from the Bart MP PR.\r\n\r\nSo this now works:\r\n\r\n```\r\nexport BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 200 --n_val 200 --n_test 200 --fp16 --save_steps 2 --model_parallel\r\n```\r\n\r\nBut! while it's fine in the training stage, it's 10x slower on eval than w/o `--model_parallel`",
"Hello @stas00 kudos to all the hard work you do, especially around continuing the ambitious work around supporting parallelism. \r\n\r\nInterested in doing some inference with the t5-11b model variant.\r\nCan you provide some insights on how many gpus would be needed to achieve that?\r\n\r\nI tried this branch with 8xV100 (16gb) on GCE. \r\nAll good while I created the model and called parallelize, but got a out of memory error on inference step when moving inputs to the first gpu device.\r\n\r\nLet me know if I have a wrong mental model about achieving this. Thanks again!\r\n\r\n\r\n",
"Thank you for the kind words, @kznmft!\r\n\r\nPlease have a look at https://github.com/huggingface/transformers/pull/9765\r\nwhich implements a very inefficient in my opinion but nevertheless working pipeline parallelism on t5, which should be superior to this naive implementation, speed-wise but it's not quite there yet. Please read the first post carefully for all the details. and you can see the follow up comments with the experiments that have been done. So 4x40gb A100s gpus weren't enough for t5-11b in initial experiments. But 5-6 of those probably should be enough.\r\n\r\nI finally got access to a machine with 4 gpus just now, so I'm going to start looking at implementing 2D parallelism - using Pipeline with DeepSpeed ZeRO-DP, so I will post news once I get something working.\r\n\r\nSubscribe to watch https://github.com/huggingface/transformers/pull/9765 and I most likely will update that PR with new info or a link to a new PR once I have something working.\r\n\r\n-----\r\n\r\n> I tried this branch with 8xV100 (16gb) on GCE.\r\n> All good while I created the model and called parallelize, but got a out of memory error on inference step when moving inputs to the first gpu device.\r\n\r\nBut you're not telling me the device map you were using. You need to spread out the layers over the 8 gpus, have you done it? unless you were relying on the default map which should spread things out.\r\n\r\nThe problem is that it doesn't take into an account that gpu 0 is always overtaxed, so I'd always try a few layers less on the first gpu 0. And then watch nvidia-smi (and later we will have better tools) to see that you get each GPU getting a somewhat equal memory allocation.\r\n\r\nBut if 4x40 couldn't fit it, I doubt that 8x16 will.\r\n\r\nRemember in t5-11b you have 45GB of params, plus optimizer states plus gradients.\r\n\r\nAlso probably need to try to use a more lean optimizer, say Adam instead of AdamW which needs more memory.\r\n\r\n\r\n\r\n",
"too long. closing."
] | 1,609 | 1,622 | 1,622 | CONTRIBUTOR | null | As I commented on in another incarnation of generalizing t5 model parallelism https://github.com/huggingface/transformers/pull/9316 so that it could be easily ported to other models I realized that it's quite unnecessary to try and remap inputs to specific devices where they will be needed in the future ahead of time. Since we have `forward` where we have access to the device of the parameters of that layer - we can completely automate the relocation of inputs to the correct devices just before `forward` is called. So this PR builds upon https://github.com/huggingface/transformers/pull/9316 and:
* [x] creates `@model_parallel_inputs_to_device` decorator used for `forward`, which automatically takes any inputs and puts them on the same device as the parameters of that layer. This allowed a complete removal of most of the `.to()` juggling logic for inputs, which was quite complex and noisy.
* [x] a lot of refactoring to make the MP as little invasive and noisy as possible, and fixing some small issues on the way.
I have tested this with:
```
pyt -sv tests/test_modeling_t5.py -k parallel
```
Which I'm not sure covers all bases, but the above tests pass.
@alexorona, please let me know what you think. And if you have real applications besides the great tests you wrote please see if it still works correctly. (It was so awesome having those tests in place! Thank you!) If it looks good and others support this proposal we can then look at doing the same for gpt2 and meanwhile I will look at bart.
@patrickvonplaten, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9323/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9323/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9323",
"html_url": "https://github.com/huggingface/transformers/pull/9323",
"diff_url": "https://github.com/huggingface/transformers/pull/9323.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9323.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9322/comments | https://api.github.com/repos/huggingface/transformers/issues/9322/events | https://github.com/huggingface/transformers/issues/9322 | 775,131,246 | MDU6SXNzdWU3NzUxMzEyNDY= | 9,322 | Conda dependencies conflict with pip dependencies | {
"login": "ZOUG",
"id": 2490328,
"node_id": "MDQ6VXNlcjI0OTAzMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2490328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZOUG",
"html_url": "https://github.com/ZOUG",
"followers_url": "https://api.github.com/users/ZOUG/followers",
"following_url": "https://api.github.com/users/ZOUG/following{/other_user}",
"gists_url": "https://api.github.com/users/ZOUG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZOUG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZOUG/subscriptions",
"organizations_url": "https://api.github.com/users/ZOUG/orgs",
"repos_url": "https://api.github.com/users/ZOUG/repos",
"events_url": "https://api.github.com/users/ZOUG/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZOUG/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @ZOUG,\r\n\r\nThanks for the issue. @LysandreJik is on holiday at the moment, but I'm sure he's more than happy to take a look when he's back :-) ",
"Hello! We have started officially maintaining the anaconda packages in version v4.0.0. Installing a version anterior to that one would result in you using the `transformers` version from another channel (such as `conda-forge`), which we do not maintain.\r\n\r\nDo you get the same error when installing `transformers` from our channel (on a more recent version)?",
"> Hello! We have started officially maintaining the anaconda packages in version v4.0.0. Installing a version anterior to that one would result in you using the `transformers` version from another channel (such as `conda-forge`), which we do not maintain.\r\n> \r\n> Do you get the same error when installing `transformers` from our channel (on a more recent version)?\r\n\r\nNo, the error does not occur on the most recent version. The problem is that packages dependent on `transformers` may not be compatible with v4.x at the moment so that the error will still arise. It might be better to provide a pip installation package for v3.5.1 that is compatible with the `conda-forge` dependencies.\r\n\r\nIn my case, I got lucky that the package that I need just released a new version today that is compatible with transformers v4.x.",
"I believe this is still the case in Docker-based environments (ex. Kaggle). I removed existing transformers and tokenizers, installed new ones (transformers 4.2.1 and tokenizers 0.9.4). \r\nIn the code, it goes back to conda and complains about tokenizers being 0.9.3\r\n```\r\n/opt/conda/lib/python3.7/site-packages/transformers/__init__.py in <module>\r\n 41 \r\n 42 # Check the dependencies satisfy the minimal versions required.\r\n---> 43 from . import dependency_versions_check\r\n 44 from .file_utils import (\r\n 45 _BaseLazyModule,\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/dependency_versions_check.py in <module>\r\n 39 continue # not required, check version only if installed\r\n 40 \r\n---> 41 require_version_core(deps[pkg])\r\n 42 else:\r\n 43 raise ValueError(f\"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py\")\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/utils/versions.py in require_version_core(requirement)\r\n 92 \"\"\" require_version wrapper which emits a core-specific hint on failure \"\"\"\r\n 93 hint = \"Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master\"\r\n---> 94 return require_version(requirement, hint)\r\n 95 \r\n 96 \r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/utils/versions.py in require_version(requirement, hint)\r\n 85 if want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)):\r\n 86 raise pkg_resources.VersionConflict(\r\n---> 87 f\"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}\"\r\n 88 )\r\n 89 \r\n\r\nVersionConflict: tokenizers==0.9.4 is required for a normal functioning of this module, but found tokenizers==0.9.3.\r\n``` \r\nEdit: found a work around to re-import modules:\r\n```\r\nimport importlib, pkg_resources, tokenizers\r\nimportlib.reload(pkg_resources)\r\nimportlib.reload(tokenizers)\r\n``` \r\ntqdm may also complain if 4.50 or later.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Issue still persists\r\n\r\n---------------------------------------------------------------------------\r\nVersionConflict Traceback (most recent call last)\r\n<ipython-input-24-3b738e6ed358> in <module>\r\n----> 1 from transformers import PreTrainedTokenizerFast\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_)\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _load_unlocked(spec)\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _load_backward_compatible(spec)\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/__init__.py in <module>\r\n 41 \r\n 42 # Check the dependencies satisfy the minimal versions required.\r\n---> 43 from . import dependency_versions_check\r\n 44 from .file_utils import (\r\n 45 _BaseLazyModule,\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_)\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _load_unlocked(spec)\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/importlib/_bootstrap.py in _load_backward_compatible(spec)\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/dependency_versions_check.py in <module>\r\n 39 continue # not required, check version only if installed\r\n 40 \r\n---> 41 require_version_core(deps[pkg])\r\n 42 else:\r\n 43 raise ValueError(f\"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py\")\r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/utils/versions.py in require_version_core(requirement)\r\n 92 \"\"\" require_version wrapper which emits a core-specific hint on failure \"\"\"\r\n 93 hint = \"Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master\"\r\n---> 94 return require_version(requirement, hint)\r\n 95 \r\n 96 \r\n\r\n/app1/anaconda3/envs/praveen_tfu/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/utils/versions.py in require_version(requirement, hint)\r\n 85 if want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)):\r\n 86 raise pkg_resources.VersionConflict(\r\n---> 87 f\"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}\"\r\n 88 )\r\n 89 \r\n\r\nVersionConflict: tokenizers==0.9.4 is required for a normal functioning of this module, but found tokenizers==0.11.6.\r\nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git master\r\n"
] | 1,609 | 1,647 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1, 3.3.1
- Platform: Windows 10, Anaconda
- Python version: 3.8
## Information
I'm installing a package built on top of `transformers` v3 in an Anaconda environment. The package is not available on Anaconda Cloud so I have to install it via `pip`. According to [the best practice](https://www.anaconda.com/blog/using-pip-in-a-conda-environment), I try to install as many requirements as possible with `conda`, including `transformers`. However, it turns out that the conda dependencies conflict with pip dependencies for `transformers` so that pip would try to downgrade the conda-installed `tokenizers` package, which `transformers` depends on.
The dependency information is as follows:
<table style="width:100%">
<tr>
<th>`transformers` version</th>
<th>conda dependency</th>
<th>pip dependency</th>
</tr>
<tr>
<td>3.5.1</td>
<td>tokenizers==0.9.4</td>
<td>tokenizers==0.9.3</td>
</tr>
<tr>
<td>3.3.1</td>
<td>tokenizers==0.9.3</td>
<td>tokenizers==0.8.1rc2</td>
</tr>
</table>
I can't seem to find any resolution other than leaving the `transformers` installation to pip completely. Is there any other possible resolution?
### Who can help
Maybe @mfuntowicz can help
## To reproduce
Steps to reproduce the behavior:
1. Run "conda install transformers=3.5.1" or "conda install transformers=3.3.1"
2. Run "pip check"
## Expected behavior
Make conda dependencies compatible with pip dependencies. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9322/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9321 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9321/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9321/comments | https://api.github.com/repos/huggingface/transformers/issues/9321/events | https://github.com/huggingface/transformers/issues/9321 | 775,128,259 | MDU6SXNzdWU3NzUxMjgyNTk= | 9,321 | Splitting texts longer that `tokenizer.max_length` into blocks of same size | {
"login": "hebecked",
"id": 12817632,
"node_id": "MDQ6VXNlcjEyODE3NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/12817632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hebecked",
"html_url": "https://github.com/hebecked",
"followers_url": "https://api.github.com/users/hebecked/followers",
"following_url": "https://api.github.com/users/hebecked/following{/other_user}",
"gists_url": "https://api.github.com/users/hebecked/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hebecked/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hebecked/subscriptions",
"organizations_url": "https://api.github.com/users/hebecked/orgs",
"repos_url": "https://api.github.com/users/hebecked/repos",
"events_url": "https://api.github.com/users/hebecked/events{/privacy}",
"received_events_url": "https://api.github.com/users/hebecked/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this notebook could help you: https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb you should check out the `def tokenizer(...)`, `def group_texts(...)` functions. I think they should help at what you want to achieve. ",
"Thank you for the fast response @patrickvonplaten.\r\nI reviewed your link only to find out it was an input problem on my side that I did not see before. Sorry to bother you for that.\r\nJust in case anyone comes across a similar issue here is the solution I found to be working for me.\r\n\r\n```\r\nclass german_bert_sentiment: \r\n\t\"\"\"\r\n\tSentiment analyzer module based on a range of sources including twitter, facebook, product reviews\r\n\thttps://huggingface.co/oliverguhr/german-sentiment-bert?text = Du+Arsch%21\r\n\t\"\"\"\r\n\r\n\tdef __init__(self, truncate=False):\r\n\t\tself.tokenizer = AutoTokenizer.from_pretrained(\"oliverguhr/german-sentiment-bert\")\r\n\t\tself.model = AutoModelForSequenceClassification.from_pretrained(\"oliverguhr/german-sentiment-bert\")\r\n\t\tself.truncate=truncate\r\n\t\tself.max_length=512\r\n\r\n\tdef analyze(self, text):\r\n\t\taverages=[]\r\n\t\terrors=[]\r\n\t\tinputs = self.tokenizer(text, return_tensors = \"pt\")#, max_length=512, stride=0, return_overflowing_tokens=True, truncation=True, padding=True)\r\n\t\tlength=len(inputs['input_ids'][0])\r\n\t\twhile length>0:\r\n\t\t\tif length>self.max_length:\r\n\t\t\t\tnext_inputs={k: (i[0][self.max_length:]).reshape(1,len(i[0][self.max_length:])) for k, i in inputs.items()}\r\n\t\t\t\tinputs={k: (i[0][:self.max_length]).reshape(1,len(i[0][:self.max_length])) for k, i in inputs.items()}\r\n\t\t\telse:\r\n\t\t\t\tnext_inputs=False\r\n\t\t\tproOrCon = self.model(**inputs)\r\n\t\t\tweights = proOrCon[0].detach().numpy()[0]\r\n\t\t\tweights[2], weights[1] = weights[1], weights[2]\r\n\t\t\tweights = softmax(weights)\r\n\t\t\taverage=np.average(np.linspace(1, -1, 3), weights = weights)\r\n\t\t\taverages.append(average)\r\n\t\t\terrors.append(\r\n\t\t\t\tnp.sqrt(np.average(np.array(np.linspace(1, -1, 3)-average)**2, weights = weights))\r\n\t\t\t\t)\r\n\t\t\t#from IPython import embed; embed()\r\n\t\t\tif self.truncate:\r\n\t\t\t\tbreak\r\n\t\t\tif next_inputs:\r\n\t\t\t\tinputs=next_inputs\r\n\t\t\telse:\r\n\t\t\t\tbreak\r\n\t\t\tlength=len(inputs['input_ids'][0])\r\n\t\taverage = np.average(averages, weights = 1./np.array(errors)**2)\r\n\t\terror = np.sqrt(1./np.sum(1./np.array(errors)**2))\r\n\t\treturn [average, error]\r\n```\r\n "
] | 1,609 | 1,609 | 1,609 | NONE | null | ## Environment info
`transformers-cli env` raises an ModuleNotFoundError, though I don't think it is relevant for my problem.
- `transformers` version: 4.0.0
- Platform: Arch Linux x86_64
- Python version: 3.9.1
- CPU only
### Who can help
It's a probably trivial tokenizer problem: @mfuntowicz
using a pretrained bert: @LysandreJik
## Information
I'm running successfully (exemplary for several models):
```
tokenizer = AutoTokenizer.from_pretrained("oliverguhr/german-sentiment-bert")
model = AutoModelForSequenceClassification.from_pretrained("oliverguhr/german-sentiment-bert")
inputs = tokenizer(text, return_tensors="pt")
proOrCon = model(**inputs)
```
Now I have several `text`s that produce more than 512 tokens. I tried to split the `inputs` manually, both by copying and modifying the object and by creating a dict in the same format, but apparently the class object stores additional information that is required and not easily accessible.
I also tried the built-in functions from the tokenizer:
```
inputs = tokenizer(text, return_tensors="pt", max_length=512, stride=0, return_overflowing_tokens=True, truncation=True, padding=True)
mapping = inputs.pop('overflow_to_sample_mapping')
```
But I don't understand how to use the mapping for the next iteration; it's just a tensor with as many entries as tokens, counting up from 0.
I've looked at the documentation (@sgugger) here https://huggingface.co/transformers/internal/tokenization_utils.html and here https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer but the output format does not exactly match my results, since I don't get the overflowing tokens, just the mapping. I tried to look at the flair library as well, since it already implements something similar for transformers but their approach seems to be for another data format too. ( https://github.com/flairNLP/flair/blob/4d1bfec296ae8000268f8bbf62d71042e3714ace/flair/embeddings/token.py#L949 )
Can someone tell me what I am doing wrong? I just want to split the tokens in sizes that a bert model (512) can handle (blocks or sliding-window, I will have to test what works best). I didn't think it would be that hard, but I spent already a whole day on this.
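For reference, this is roughly the kind of block-wise splitting I am after - a minimal sketch I put together (`text` is assumed to be defined as above; special tokens are omitted for simplicity, and I have not verified that plain fixed-size blocks beat a sliding window):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("oliverguhr/german-sentiment-bert")
model = AutoModelForSequenceClassification.from_pretrained("oliverguhr/german-sentiment-bert")

# Tokenize without truncation, then slice the ids into fixed-size blocks.
token_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
block_size = 512

all_logits = []
for start in range(0, len(token_ids), block_size):
    block = token_ids[start:start + block_size]
    inputs = {
        "input_ids": torch.tensor([block]),
        "attention_mask": torch.tensor([[1] * len(block)]),
    }
    with torch.no_grad():
        all_logits.append(model(**inputs).logits)
```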
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9321/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9320 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9320/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9320/comments | https://api.github.com/repos/huggingface/transformers/issues/9320/events | https://github.com/huggingface/transformers/pull/9320 | 775,105,631 | MDExOlB1bGxSZXF1ZXN0NTQ1ODU1NjEy | 9,320 | [Seq2SeqTrainer] Fix Typo | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a bug when one does not want to use `generate()` to evaluate in Seq2SeqTrainer. This PR probably deserves a test, but leaving this for a future PR when @sgugger is back.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9320/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9320/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9320",
"html_url": "https://github.com/huggingface/transformers/pull/9320",
"diff_url": "https://github.com/huggingface/transformers/pull/9320.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9320.patch",
"merged_at": 1609102670000
} |
https://api.github.com/repos/huggingface/transformers/issues/9319 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9319/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9319/comments | https://api.github.com/repos/huggingface/transformers/issues/9319/events | https://github.com/huggingface/transformers/issues/9319 | 775,070,189 | MDU6SXNzdWU3NzUwNzAxODk= | 9,319 | Some weights of AlbertForPreTraining were not initialized from the model checkpoint at albert-base-v2 and are newly initialized: ['sop_classifier.classifier.weight', 'sop_classifier.classifier.bias'] | {
"login": "L-Zhe",
"id": 46775682,
"node_id": "MDQ6VXNlcjQ2Nzc1Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/46775682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/L-Zhe",
"html_url": "https://github.com/L-Zhe",
"followers_url": "https://api.github.com/users/L-Zhe/followers",
"following_url": "https://api.github.com/users/L-Zhe/following{/other_user}",
"gists_url": "https://api.github.com/users/L-Zhe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/L-Zhe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/L-Zhe/subscriptions",
"organizations_url": "https://api.github.com/users/L-Zhe/orgs",
"repos_url": "https://api.github.com/users/L-Zhe/repos",
"events_url": "https://api.github.com/users/L-Zhe/events{/privacy}",
"received_events_url": "https://api.github.com/users/L-Zhe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm not sure we have a completely fine-tuned SOP classification model. My best advice is to try out different models and see which model not randomly allocates the weights for those layers."
] | 1,609 | 1,610 | 1,610 | NONE | null | ALBERT uses a sentence-order prediction (SOP) loss to optimize its parameters, and I want to use it to score the coherence between two sentences. But when I use AlbertForPreTraining to load the albert-xxlarge-v2 checkpoint, it warns me that:
_Some weights of AlbertForPreTraining were not initialized from the model checkpoint at albert-base-v2 and are newly initialized: ['sop_classifier.classifier.weight', 'sop_classifier.classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference._
I have tried multiple times and the output is different on each run, which means that the final linear classification layer has not been loaded from the checkpoint but is randomly initialized. How can I use the pretrained SOP classification head without fine-tuning?
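For reference, this is the kind of usage I am aiming for - a minimal sketch (the sentence pair is made up; with a randomly initialized head the scores are of course meaningless):
```python
import torch
from transformers import AlbertForPreTraining, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForPreTraining.from_pretrained("albert-base-v2")

# Encode a sentence pair and read the sentence-order-prediction head.
inputs = tokenizer("I went to the store.", "I bought some milk there.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# sop_logits has shape (batch_size, 2); its softmax would be the coherence score.
print(torch.softmax(outputs.sop_logits, dim=-1))
```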
Hope to get response rapidly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9319/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9319/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9318 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9318/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9318/comments | https://api.github.com/repos/huggingface/transformers/issues/9318/events | https://github.com/huggingface/transformers/issues/9318 | 775,052,846 | MDU6SXNzdWU3NzUwNTI4NDY= | 9,318 | Fail when running the multimodal example | {
"login": "jc-hou",
"id": 30210529,
"node_id": "MDQ6VXNlcjMwMjEwNTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jc-hou",
"html_url": "https://github.com/jc-hou",
"followers_url": "https://api.github.com/users/jc-hou/followers",
"following_url": "https://api.github.com/users/jc-hou/following{/other_user}",
"gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions",
"organizations_url": "https://api.github.com/users/jc-hou/orgs",
"repos_url": "https://api.github.com/users/jc-hou/repos",
"events_url": "https://api.github.com/users/jc-hou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jc-hou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That example is unfortunately unmaintained. Have you tried playing around with LXMERT, which is also a multi-modal model? There is a demo available [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/lxmert).",
"Oh, I didn't know there is one with LXMERT. I will try that. Thanks.",
"You can make it work with a little inference call modification. \r\nAdd **\"return_dict\": False** to **inputs** dict.\r\nLike this:\r\n```\r\ninputs = {\r\n \"input_ids\": batch[0],\r\n \"input_modal\": batch[2],\r\n \"attention_mask\": batch[1],\r\n \"modal_start_tokens\": batch[3],\r\n \"modal_end_tokens\": batch[4],\r\n \"return_dict\": False\r\n }\r\noutputs = model(**inputs)\r\n```"
] | 1,609 | 1,620 | 1,609 | NONE | null | Hi,
I tried to run the [multimodal example](https://github.com/huggingface/transformers/tree/master/examples/research_projects/mm-imdb).
By running:
```
python run_mmimdb.py \
--data_dir ../dataset/ \
--model_name_or_path bert-base-uncased \
--output_dir ../output \
--do_train \
--do_eval \
--max_seq_len 512 \
--gradient_accumulation_steps 20 \
--num_image_embeds 3 \
--num_train_epochs 100 \
--patience 5 \
--overwrite_output_dir
```
I met the following error message:
```
12/27/2020 16:01:33 - INFO - __main__ - ***** Running training *****
12/27/2020 16:01:33 - INFO - __main__ - Num examples = 15513
12/27/2020 16:01:33 - INFO - __main__ - Num Epochs = 100
12/27/2020 16:01:33 - INFO - __main__ - Instantaneous batch size per GPU = 8
12/27/2020 16:01:33 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 160
12/27/2020 16:01:33 - INFO - __main__ - Gradient Accumulation steps = 20
12/27/2020 16:01:33 - INFO - __main__ - Total optimization steps = 9700
Epoch: 0%| | 0/100 [00:00<?, ?it/s/data/stars/user/jhou/collection-stars/anaconda3/envs/pytorch_stable_171_pip_cu102/lib/python3.6/site-packages/PIL/Image.py:2837: DecompressionBombWarning: Image size (96592500 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
DecompressionBombWarning,
Iteration: 0%| | 0/1940 [00:02<?, ?it/s]
Epoch: 0%| | 0/100 [00:02<?, ?it/s]
Traceback (most recent call last):
File "run_mmimdb.py", line 572, in <module>
main()
File "run_mmimdb.py", line 525, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer, criterion)
File "run_mmimdb.py", line 151, in train
outputs = model(**inputs)
File "/data/stars/user/jhou/collection-stars/anaconda3/envs/pytorch_stable_171_pip_cu102/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/stars/user/jhou/collection-stars/anaconda3/envs/pytorch_stable_171_pip_cu102/lib/python3.6/site-packages/transformers/models/mmbt/modeling_mmbt.py", line 366, in forward
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
File "/data/stars/user/jhou/collection-stars/anaconda3/envs/pytorch_stable_171_pip_cu102/lib/python3.6/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MMBTForClassification' object has no attribute 'config'
```
torch:1.7.1
transformers:4.0.1
I tried with torch:1.7.0, transformers:4.1.0, also failed with the same error.
Any advice?
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9318/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9317 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9317/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9317/comments | https://api.github.com/repos/huggingface/transformers/issues/9317/events | https://github.com/huggingface/transformers/issues/9317 | 775,045,027 | MDU6SXNzdWU3NzUwNDUwMjc= | 9,317 | Bug: metrics inside on_evalute callback is passed wrongly | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Issue resolved with adding this line in evaluate function:\r\n\r\n self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, results)\r\n",
"reopened, since this is not solved after this line, indeed this looks like a bug, could you have a look please?",
"Hello, could you please put all of your environment information as asked in the template, as well as the command you used to launch the script? We need this in order to help you. Thank you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"In my case, I found that trainer does not do the evaluation event at the end of each epoch even I set the trainer.args eval_stategy = epoch. So maybe we find the log info such as {eval_loss: xxx, eval_acc: xxx, **epoch: 1.98**}. And finally I prefer to use callback which is defined by myself to log metric."
] | 1,609 | 1,697 | 1,619 | NONE | null | Hi
it would be very helpful to save all metrics at every eval step with evaluation_strategy = steps. For this, I wrote the following callback to access the metrics:
```
class EvaluationCallback(TrainerCallback):
def on_evaluate(self, args, state, control, **kwargs):
print("### kwargs ", kwargs['metrics']) #
```
{'eval_loss': 972.89990234375, 'eval_acc': 0.0}
I pass this callback to the trainer.py
From what I see, these metrics do not match the output of the evaluate function; for instance, in my case the evaluate output is
```
{'boolq_eval_loss': 525.3097534179688, 'boolq_eval_acc': 60.6, 'rte_eval_loss': 972.89990234375, 'rte_eval_acc': 0.0}
```
Could you tell me how I can access the output of evaluate(), exactly as it is returned, inside this callback? Thanks.
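One workaround sketch I am considering (untested on my setup) is to hook on_log instead, since the trainer passes the dictionary it has just logged - which should include the prefixed eval metrics - to that event:
```python
from transformers import TrainerCallback

class EvalMetricsHistory(TrainerCallback):
    def __init__(self):
        self.history = []

    def on_log(self, args, state, control, logs=None, **kwargs):
        # `logs` is whatever the trainer just logged, e.g. an eval metrics dict.
        if logs is not None and any(key.startswith(("boolq_", "rte_")) for key in logs):
            self.history.append((state.global_step, dict(logs)))
```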
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9317/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9316 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9316/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9316/comments | https://api.github.com/repos/huggingface/transformers/issues/9316/events | https://github.com/huggingface/transformers/pull/9316 | 774,980,122 | MDExOlB1bGxSZXF1ZXN0NTQ1NzY2NzM2 | 9,316 | [t5 model parallel] misc fixes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2627272588,
"node_id": "MDU6TGFiZWwyNjI3MjcyNTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel",
"name": "Model Parallel",
"color": "8B66A5",
"default": false,
"description": "Model Parallelilsm Implementations"
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Somehow I'm feeling that this approach of having special logic to remap inputs ahead of time is over-complicated. I haven't tried it yet, but won't it be much much simpler to remap inputs once the model layer is visible and just before they are used by that layer - i.e. at the point where one gets:\r\n```\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!\r\n```\r\nThen we just do `input_foo = input_foo.to(next(self.parameters()).device)` and we are done? No logic required other than just put the inputs on the same device as this layer.\r\n\r\nWe might be able to even remove all those `if self.model_parallel` in most places in `forward`, and have the same code for w/ or w/o MP. Perhaps with some wrapper that will be noop when not under MP. It could also handle `None` to avoid a gazillion of `if not None` checks. I'd also make it an in-place operation, just like `nn.Module.to` does.\r\n\r\n\r\n**edit** I branched off from this PR and implemented this - works amazingly simple: https://github.com/huggingface/transformers/pull/9323",
"I also think #9323 is the way to go here",
"too long. closing."
] | 1,609 | 1,622 | 1,622 | CONTRIBUTOR | null | This PR:
* in 2 places fixes an assumption that devices on the device map are always ` (0, 1, 2, 3)` and:
1. are ordered by their cuda device id and not `(2, 3, 0, 1)`
2. have a stride of 1 and not `(0, 2, 3, 5) `
* adds a missing `to()`, removes a redundant `to()`
* removes obvious comments
* removes code that gets run twice
* this PR continues at #9323 - I branched off from this PR and implemented an automatic remap of inputs and a lot of refactoring.
I will comment on the reasons for changes in the code.
There is one gotcha wrt py36 w/o cython not having its dict ordered. Please see https://github.com/huggingface/transformers/pull/9316#discussion_r549068073
I think the first-device/last-device/is-this-the-last-layer-of-this-device logic should be abstracted away for readability, so that the same logic does not need to be replicated in each model. Perhaps `self.device_map` should be a smart class that can provide all the answers via its methods.
@alexorona, I'm studying your t5-mp implementation to do the same for bart. Thank you for doing the hard work of putting the foundation in place and porting 2 models!!!
Please have a look and let me know if my tweaks make sense. Your original code is excellent - I'm just trying to think about how to make it easier to replicate in other models and improve readability, hence the gazillion of questions/suggestions.
Also, if you don't mind I have a few design questions:
1. Could you please comment on why you are splitting the encoder between all devices on the device map and the same for the decoder? Won't it be more efficient performance-wise to put the encoder on the first group of devices and decoder on the second?
2. I also find it confusing that the device map doesn't map out the whole model, but just the encoder and assumes that the decoder has the same config. I'm not familiar with t5 but other models definitely can have encoder and decoder that don't at all match number of layers-wise. And while this is perhaps not the case for t5, I think the device map should be intuitively similar for all models as we slowly progress with porting other models to MP. That is I think it should include all layers of the model and not half of them.
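For concreteness, a sketch of the kind of device map I am referring to (t5-small with its 6 encoder blocks, split across 2 GPUs; the split is arbitrary and this snippet needs two visible GPUs):
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Keys are CUDA device ids, values are encoder block indices; the decoder is
# implicitly split the same way, which is what question 2 above is about.
device_map = {0: [0, 1, 2], 1: [3, 4, 5]}
model.parallelize(device_map)
```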
@patrickvonplaten, @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9316/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9316",
"html_url": "https://github.com/huggingface/transformers/pull/9316",
"diff_url": "https://github.com/huggingface/transformers/pull/9316.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9316.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9315 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9315/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9315/comments | https://api.github.com/repos/huggingface/transformers/issues/9315/events | https://github.com/huggingface/transformers/issues/9315 | 774,968,494 | MDU6SXNzdWU3NzQ5Njg0OTQ= | 9,315 | [model site] search UI: language: tags, directionality and filtering | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc'ing @gary149 and @beurkinger ",
"Hmm, I just discovered I had 2 related issues opened some months back:\r\n- https://github.com/huggingface/transformers/issues/8531\r\n- https://github.com/huggingface/transformers/issues/7206"
] | 1,609 | 1,616 | 1,616 | CONTRIBUTOR | null | I tried to use the models site to find which models I can use for translation of specific languages, and here are the issues I have encountered while doing that:
1. many HF-created models aren't tagged with the languages they were trained on - e.g. t5: https://github.com/huggingface/transformers/issues/9314 - would it be possible to go over the HF-created main models and ensure they are clearly tagged with the languages they were trained on? These get downloaded a lot, so putting a bit of metadata in place will go a long way toward making users' lives a bit easier.
2. the language tags presume bi-direction, but many models have been trained in one direction only - e.g. most wmt and most t5 models. Would it be helpful to support not just the language tags but also the directional language tags if they are one-way only?
The t5 models are one-direction only, so they probably need one-direction tags. I'm not sure how that would work with the Language tags in the search UI. Perhaps it's enough to indicate that in the README, but as the number of models grows, being able to quickly filter what's needed will save users a lot of time, so planning ahead would be useful. I.e., if I need to perform a FR-to-EN translation, I would benefit from getting hits only for models that can do that.
3. wrt search UI - I don't understand how a handful of special language tags is selected - there are about a dozen of language tags in the search API dropdown, Malay is there when there are hardly any models trained on that language, but Russian which is 5th on the list of number of models is not there.
And those languages that are "favorite" aren't sorted... very strange.
4. search UI 2: And to get to the language one wants which is not on the favorite list - one has to solve the puzzle:
* select "See All languages", which goes to https://huggingface.co/languages
* hit on the list of models for that language,
* then filter by the model type by typing it in and then one has arrived.
Surely, there must be an easier way to select a language filter that doesn't take 3 steps which aren't obvious at all
5. Moreover if I want to select 2 languages that aren't on the favorite list, I'm out of luck, since it's not possible with the current API. It only works for the favorite list.
And even if some of us know how to hack the URL and manually insert: https://huggingface.co/models?filter=ru,en - this is an OR operation, how do I do AND operation or I guess this is related to item 2 of this Issue - how do I filter by the to/from language.
All of these requests/questions are nice to have and none a showstopper.
Thank you!
@julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9315/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9315/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9314 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9314/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9314/comments | https://api.github.com/repos/huggingface/transformers/issues/9314/events | https://github.com/huggingface/transformers/issues/9314 | 774,964,612 | MDU6SXNzdWU3NzQ5NjQ2MTI= | 9,314 | [model site] missing language tags for t5 models | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I updated all 5 models to include fr/ro/de language tags."
] | 1,609 | 1,616 | 1,616 | CONTRIBUTOR | null | Would it be possible to update core t5-* models' cards to include what languages they were trained on? Currently it says "en", which is very lacking.
e.g., see:
* https://huggingface.co/t5-base
* https://huggingface.co/t5-small
* etc.
The core t5 models should somehow have hits with https://huggingface.co/models?search=t5&filter=de, but they don't. Probably because they aren't tagged with the language tags. So they aren't found.
From: https://s3.amazonaws.com/models.huggingface.co/bert/t5-small-config.json
it looks like: French/German/Romanian. But also it looks to only support one direction, so probably adding the following would be clear enough to the end user:
* en_to_fr
* en_to_ge
* en_to_ro
Thank you.
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9314/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9313 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9313/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9313/comments | https://api.github.com/repos/huggingface/transformers/issues/9313/events | https://github.com/huggingface/transformers/issues/9313 | 774,948,683 | MDU6SXNzdWU3NzQ5NDg2ODM= | 9,313 | [TFBart-like models] Problem with tf saving | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This bug forced me to disable the corresponding test in the new `TFLed` model for now, see: https://github.com/huggingface/transformers/pull/9278/files#r549042909",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,619 | 1,619 | MEMBER | null | ## Context
Usually, encoder-decoder models require both `input_ids` and `decoder_input_ids` in order to do one forward pass. If one *e.g.* only passes the `input_ids` to TFT5 -> the model will complain:
```python
from transformers import TFT5ForConditionalGeneration
import tensorflow as tf
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
model(input_ids=tf.convert_to_tensor([10 * [2]])) # => will result in error saying `decoder_input_ids` have to be provided which is expected and correct
```
Now TFBart is a bit special in that it automatically generates the `decoder_input_ids` if they are not passed -> so that the above example would not throw an error for TFBartForConditionalGeneration.
The reason for this is this line: https://github.com/huggingface/transformers/blob/61443cd7d917ef323a799ee27bb4abc4344f0d11/src/transformers/models/bart/modeling_tf_bart.py#L1053
-> it automatically creates the `decoder_input_ids` from the `input_ids` if they are not provided. This is however more a hack than a good solution IMO. Soon we want to decouple the Bart-like models from each other and it would be good to delete this line from at least new Bart-like models. Now the problem.
## Problem:
The problem is now that if we delete these lines from Bart, then the `tf.saved_model.save(model, tmpdirname)` function does not work anymore. To reproduce:
Go into master and comment out this if statement in TFBart: https://github.com/huggingface/transformers/blob/61443cd7d917ef323a799ee27bb4abc4344f0d11/src/transformers/models/bart/modeling_tf_bart.py#L1053.
Then run the following code:
```python
from transformers import TFBartForConditionalGeneration
import tempfile
import tensorflow as tf
model = TFBartForConditionalGeneration.from_pretrained("sshleifer/bart-tiny-random")
input_ids = tf.convert_to_tensor([10 * [1]])
decoder_input_ids = tf.convert_to_tensor([10 * [8]])
inputs_dict = {"input_ids": input_ids, "decoder_input_ids": decoder_input_ids}
logits = model(inputs_dict).logits
model._saved_model_inputs_spec = None
model._set_save_spec(inputs_dict)
with tempfile.TemporaryDirectory() as tmpdirname:
tf.saved_model.save(model, tmpdirname)
model = tf.keras.models.load_model(tmpdirname)
logits_2 = model(inputs_dict)["logits"]
```
=> the code will throw an error, but it should not! It seems like there is a weird naming mismatch between `input_ids` of `TFBartDecoder` and the `decoder_input_ids` in `TFBartModel`...@jplu I'd be thrilled if you could take a look at this and see how it can be solved. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9313/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9312 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9312/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9312/comments | https://api.github.com/repos/huggingface/transformers/issues/9312/events | https://github.com/huggingface/transformers/issues/9312 | 774,943,037 | MDU6SXNzdWU3NzQ5NDMwMzc= | 9,312 | RAG model implementation seems different from the paper | {
"login": "XinyuHua",
"id": 8295434,
"node_id": "MDQ6VXNlcjgyOTU0MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8295434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XinyuHua",
"html_url": "https://github.com/XinyuHua",
"followers_url": "https://api.github.com/users/XinyuHua/followers",
"following_url": "https://api.github.com/users/XinyuHua/following{/other_user}",
"gists_url": "https://api.github.com/users/XinyuHua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XinyuHua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XinyuHua/subscriptions",
"organizations_url": "https://api.github.com/users/XinyuHua/orgs",
"repos_url": "https://api.github.com/users/XinyuHua/repos",
"events_url": "https://api.github.com/users/XinyuHua/events{/privacy}",
"received_events_url": "https://api.github.com/users/XinyuHua/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @XinyuHua,\r\n\r\nregarding 1) I'm not really sure either. Maybe the author can give a better answer here (hope it's ok to ping you here @ola13) For 2) Yes it should be correct. You can see from the paper that at every generation step `i` the marginal probability over all tokens `z` is calculated (compare to the equation of RAG-Token Model in 2.1). In the equation, we sum over all `z` which corresponds to summing overall `doc_logprobs` above -> so this looks correct to me. Just the fact that the `marginalize` function (your code above) is executed at every forward pass shows that this has to correspond to `RagToken`.",
"Thanks for the explanation!\r\n\r\n> the marginal probability over all tokens `z` is calculated\r\n\r\n`z` is the document instead of token, right? And the paper says \"we can draw a different latent document for each\r\ntarget token and marginalize accordingly\", so `p(z|x)` should be a different score for different `y_i`, but the code looks like `p(z|x)` is unchanged for all `y_i`s, whose log form is `doc_logprobs`. So effectively this is how `RagSequence` model is framed. Does that mean the `RagToken` model is actually trained the same way as `RagSequence`, but just the generation is different? \r\n\r\nAnother related question, I checked the pre-trained `RagConfig` for `RagToken`, and the `do_marginalize` is actually set to False, so the marginalize method is never called during forward?",
"`z` is the tensor of containing the logprob of all docs -> this is why it's called `doc_logprobs`. If you check out the dimensions of this tensor you should see that one dimension exactly corresponds to `n_docs`. \r\n\r\n`do_marginalize` is called at every forward pass because it's set to `True` in the function argument here: https://github.com/huggingface/transformers/blob/8e74eca7f2b3235f8d5340d66361ea656c67bac7/src/transformers/models/rag/modeling_rag.py#L1099",
"Hi @XinyuHua, thanks for the questions! Regarding your first point:\r\n\r\n> The marginalization for RagSequenceForGeneration seems a bit strange. From line 998 to line 1001 (link), only the second tokens in seq_logprobs are getting scored by doc_logprobs\r\n\r\nSince we're operating in the log-space, multiplications from the formulas in section 2.1. of the paper become additions. So instead of multiplying the `p(z|x) * p(y| x,z)`, we sum: `log p(z|x) + log p(y | z,x)` (or `doc_logprobs` + `seq_logprobs` using our variables names from the code), where x is the input sequence, y is the output sequence and z is the retrieved document. Note that in the logspace, `log p(y | z,x)` decomposes into the sum of logprobs of each token in y. In the part of the code you linked we perform this summation - we only want to add `doc_logprobs` once per sequence - we don't need to add it to each token of the sequence.\r\n\r\nNow the reason we add `doc_logprobs` to the second token is that we want to avoid adding it to the BOS token, in case the target sequence doesn't contain one or in case the `exclude_bos_score` argument is used - otherwise we would effectively do no marginalization at all in these cases.\r\n\r\nI hope this helps, but let us know if anything's still unclear!",
"Hi @ola13 and @patrickvonplaten , thanks for the detailed explanations! "
] | 1,609 | 1,609 | 1,609 | NONE | null | Hi folks,
Thanks for open-sourcing RAG! After reading the model description in the paper and the actual code, I noticed a few discrepancies:
1. The marginalization for `RagSequenceForGeneration` seems a bit strange. From line 998 to line 1001 ([link](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/modeling_rag.py#L998)), only the second tokens in `seq_logprobs` are getting scored by `doc_logprobs`:
```
# RAG-sequence marginalization
first_token_scores = seq_logprobs[:, :, :1, :]
second_token_scores = seq_logprobs[:, :, 1:2, :]
remainder = seq_logprobs[:, :, 2:, :]
rag_logprobs = torch.cat([first_token_scores, second_token_scores + doc_logprobs, remainder], dim=2)
```
I wonder if this is intended? I couldn't find this mentioned in the paper.
2. The marginalization for `RagTokenForGeneration` seems more like the Rag-sequence model in the paper, because the doc_scores are the same for all tokens in the same sequence. Is this correct?
```
# RAG-token marginalization
seq_logprobs = torch.nn.functional.log_softmax(seq_logits, dim=-1).view(
seq_logits.shape[0] // n_docs, n_docs, -1, seq_logits.size(-1)
)
doc_logprobs = torch.log_softmax(doc_scores, dim=1)
log_prob_sum = seq_logprobs + doc_logprobs.unsqueeze(-1).unsqueeze(-1)
return torch.logsumexp(log_prob_sum, dim=1)
```
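To make the shapes concrete, here is a tiny self-contained toy version of the same log-space computation (the dimensions are made up):
```python
import torch

n_docs, seq_len, vocab_size = 2, 3, 5
seq_logprobs = torch.log_softmax(torch.randn(1, n_docs, seq_len, vocab_size), dim=-1)
doc_logprobs = torch.log_softmax(torch.randn(1, n_docs), dim=1)

# log p(y_i | x) = logsumexp over z of [ log p(z | x) + log p(y_i | x, z) ]
marginalized = torch.logsumexp(
    seq_logprobs + doc_logprobs.unsqueeze(-1).unsqueeze(-1), dim=1
)
print(marginalized.shape)  # torch.Size([1, 3, 5])
```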
Thanks in advance!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9312/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9311 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9311/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9311/comments | https://api.github.com/repos/huggingface/transformers/issues/9311/events | https://github.com/huggingface/transformers/issues/9311 | 774,904,406 | MDU6SXNzdWU3NzQ5MDQ0MDY= | 9,311 | T5-base goes out of memory on 4 GPUs with as small batch size as 4 | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Here are things you may try (they are unrelated to each other, so you can try in any order that resonates):\r\n\r\n1. turn off `--fp16` or keep it but switch to [pytorch-nightly](https://pytorch.org/get-started/locally/) - there was a large memory leak fixed a few weeks ago related to autocast (fp16) - if this is not related to `autocast`/ftp16 this won't help then. `--fp16` was triggering the leak. Switching to apex amp is another option to try if you're hitting this memory leak in pytorch.\r\n\r\n2. If you are using huggingface trainer (I assume `finetune_trainer.py` is from examples/seq2seq then you're good) and if you can use `transformers` master, I'd suggest using the just added `--sharded_ddp` option. In my few experiments I was able to get 2-3 times bigger batches. It's documented in this PR https://github.com/huggingface/transformers/pull/9208 (we are just waiting for a new fairscale release to merge it). But you can just use it w/o needing to understand if you are short on time. So if you want to try it, install both transformers and [fairscale](https://github.com/facebookresearch/fairscale/) from master and then that new option will be available. \r\n\r\nAnd please edit your Issue to show the command line you use, so we can see what cl args and/or hyper parameters you're using.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: LINUX
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): -
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
Trainer: @sgugger
T5: @patrickvonplaten
examples/seq2seq: @patil-suraj
## Information
Model I am using: T5-base, with a batch size of 8 on 4 GPUs. I am always getting out of memory, even with small batch sizes. This looks like a bug, as this model is not really big. I am under time pressure. Is there anyone who could help me with this bug? Thanks.
The tasks I am working on is:
* GLUE benchmark
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
<!-- A clear and concise description of what you would expect to happen. -->
## Error Stack
```
0%| | 0/148395 [00:00<?, ?it/s]Traceback (most recent call last):
File "finetune_trainer.py", line 303, in <module>
main()
File "finetune_trainer.py", line 239, in main
training_args.optimize_from_scratch) else None
File "/julia/codes/trainers/trainer.py", line 804, in train
self.optimizer.step()
File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/opt/conda/envs/t5/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/optimization.py", line 285, in step
state["exp_avg_sq"] = torch.zeros_like(p.data)
RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 2; 15.78 GiB total capacity; 14.10 GiB already allocated; 20.25 MiB free; 14.42 GiB reserved in total by PyTorch)
Traceback (most recent call last):
File "finetune_trainer.py", line 303, in <module>
main()
File "finetune_trainer.py", line 239, in main
training_args.optimize_from_scratch) else None
File "/julia/codes/trainers/trainer.py", line 804, in train
self.optimizer.step()
File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/opt/conda/envs/t5/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/optimization.py", line 296, in step
denom = exp_avg_sq.sqrt().add_(group["eps"])
RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 0; 15.78 GiB total capacity; 14.06 GiB already allocated; 4.25 MiB free; 14.44 GiB reserved in total by PyTorch)
Traceback (most recent call last):
File "finetune_trainer.py", line 303, in <module>
main()
File "finetune_trainer.py", line 239, in main
Traceback (most recent call last):
File "finetune_trainer.py", line 303, in <module>
training_args.optimize_from_scratch) else None
File "/julia/codes/trainers/trainer.py", line 804, in train
main()
File "finetune_trainer.py", line 239, in main
self.optimizer.step()
File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/optim/lr_scheduler.py", line 67, in wrapper
training_args.optimize_from_scratch) else Nonereturn wrapped(*args, **kwargs)
File "/julia/codes/trainers/trainer.py", line 804, in train
File "/opt/conda/envs/t5/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/optimization.py", line 296, in step
denom = exp_avg_sq.sqrt().add_(group["eps"])
RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 1; 15.78 GiB total capacity; 14.13 GiB already allocated; 10.25 MiB free; 14.46 GiB reserved in total by PyTorch)
self.optimizer.step()
File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/opt/conda/envs/t5/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/optimization.py", line 285, in step
state["exp_avg_sq"] = torch.zeros_like(p.data)
RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 3; 15.78 GiB total capacity; 14.10 GiB already allocated; 26.25 MiB free; 14.44 GiB reserved in total by PyTorch)
0%| | 0/148395 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/opt/conda/envs/t5/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/envs/t5/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 260, in <module>
main()
File "/opt/conda/envs/t5/lib/python3.7/site-packages/torch-1.7.1-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/t5/bin/python', '-u', 'finetune_trainer.py', '--local_rank=3', 'configs/glue.json']' returned non-zero exit status 1.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9311/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9311/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9310 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9310/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9310/comments | https://api.github.com/repos/huggingface/transformers/issues/9310/events | https://github.com/huggingface/transformers/issues/9310 | 774,846,751 | MDU6SXNzdWU3NzQ4NDY3NTE= | 9,310 | ModuleNotFoundError: No module named 'tokenizations.tokenizations' | {
"login": "louisabraham",
"id": 13174805,
"node_id": "MDQ6VXNlcjEzMTc0ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/13174805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louisabraham",
"html_url": "https://github.com/louisabraham",
"followers_url": "https://api.github.com/users/louisabraham/followers",
"following_url": "https://api.github.com/users/louisabraham/following{/other_user}",
"gists_url": "https://api.github.com/users/louisabraham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louisabraham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louisabraham/subscriptions",
"organizations_url": "https://api.github.com/users/louisabraham/orgs",
"repos_url": "https://api.github.com/users/louisabraham/repos",
"events_url": "https://api.github.com/users/louisabraham/events{/privacy}",
"received_events_url": "https://api.github.com/users/louisabraham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry, wrong repo"
] | 1,608 | 1,608 | 1,608 | NONE | null | ```
nlp = en_trf_bertbaseuncased_lg.load()
File "/usr/lib/python3.9/site-packages/en_trf_bertbaseuncased_lg/__init__.py", line 12, in load
return load_model_from_init_py(__file__, **overrides)
File "/usr/lib/python3.9/site-packages/spacy/util.py", line 239, in load_model_from_init_py
return load_model_from_path(data_path, meta, **overrides)
File "/usr/lib/python3.9/site-packages/spacy/util.py", line 202, in load_model_from_path
cls = get_lang_class(lang)
File "/usr/lib/python3.9/site-packages/spacy/util.py", line 74, in get_lang_class
if lang in registry.languages:
File "/usr/lib/python3.9/site-packages/catalogue.py", line 56, in __contains__
has_entry_point = self.entry_points and self.get_entry_point(name)
File "/usr/lib/python3.9/site-packages/catalogue.py", line 140, in get_entry_point
return entry_point.load()
File "/usr/lib/python3.9/importlib/metadata.py", line 77, in load
module = import_module(match.group('module'))
File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/lib/python3.9/site-packages/spacy_transformers/__init__.py", line 2, in <module>
from .pipeline.tok2vec import TransformersTok2Vec # noqa
File "/usr/lib/python3.9/site-packages/spacy_transformers/pipeline/__init__.py", line 3, in <module>
from .wordpiecer import TransformersWordPiecer # noqa
File "/usr/lib/python3.9/site-packages/spacy_transformers/pipeline/wordpiecer.py", line 3, in <module>
from tokenizations import get_alignments
File "/usr/lib/python3.9/site-packages/tokenizations/__init__.py", line 2, in <module>
from .tokenizations import (
ModuleNotFoundError: No module named 'tokenizations.tokenizations'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9310/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9309 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9309/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9309/comments | https://api.github.com/repos/huggingface/transformers/issues/9309/events | https://github.com/huggingface/transformers/issues/9309 | 774,821,526 | MDU6SXNzdWU3NzQ4MjE1MjY= | 9,309 | Entry-level demo of visual question answering | {
"login": "yezhengli-Mr9",
"id": 16505983,
"node_id": "MDQ6VXNlcjE2NTA1OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/16505983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yezhengli-Mr9",
"html_url": "https://github.com/yezhengli-Mr9",
"followers_url": "https://api.github.com/users/yezhengli-Mr9/followers",
"following_url": "https://api.github.com/users/yezhengli-Mr9/following{/other_user}",
"gists_url": "https://api.github.com/users/yezhengli-Mr9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yezhengli-Mr9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yezhengli-Mr9/subscriptions",
"organizations_url": "https://api.github.com/users/yezhengli-Mr9/orgs",
"repos_url": "https://api.github.com/users/yezhengli-Mr9/repos",
"events_url": "https://api.github.com/users/yezhengli-Mr9/events{/privacy}",
"received_events_url": "https://api.github.com/users/yezhengli-Mr9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @yezhengli-Mr9 not sure what you are asking here,\r\nby `Trainer` demo do you mean an example showing how to fine-tune `LXMERT`?\r\n\r\nIf you are looking for how to use `LXMERT` then this [demo notebook](https://colab.research.google.com/drive/18TyuMfZYlgQ_nXo-tr8LCnzUaoX0KS-h?usp=sharing) show's how to use LXMERT for visual QA ",
"> Hi @yezhengli-Mr9 not sure what you are asking here,\r\n> by `Trainer` demo do you mean an example showing how to fine-tune `LXMERT`?\r\n> \r\n> If you are looking for how to use `LXMERT` then this [demo notebook](https://colab.research.google.com/drive/18TyuMfZYlgQ_nXo-tr8LCnzUaoX0KS-h?usp=sharing) show's how to use LXMERT for visual QA\r\n\r\nHi @patil-suraj @patrickvonplaten , thanks a lot but [`examples/lxmert/`](https://github.com/huggingface/transformers/blob/master/examples/lxmert/) no longer exists although I am reconstructing some functionality since the [demo notebook](https://colab.research.google.com/drive/18TyuMfZYlgQ_nXo-tr8LCnzUaoX0KS-h?usp=sharing) seems quite instructive."
] | 1,608 | 1,609 | 1,609 | NONE | null | ## Environment info
Is there any entry-level demo of visual question answering? (I am also interested in adding a title for each image later on.)
It would be even better with `Trainer` support added, @sgugger. I follow the example from [Lxmert](https://huggingface.co/transformers/model_doc/lxmert.html).
```python
from transformers import LxmertTokenizer, LxmertModel
import torch
tokenizer = LxmertTokenizer.from_pretrained('unc-nlp/lxmert-base-uncased')
model = LxmertModel.from_pretrained('unc-nlp/lxmert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
which raises:
```
File "/home/yezli/miniconda3/lib/python3.8/site-packages/transformers/models/lxmert/modeling_lxmert.py", line 933, in forward
assert visual_feats is not None, "`visual_feats` cannot be `None`"
AssertionError: `visual_feats` cannot be `None`
```
- `transformers` version:
- Platform: `Ubuntu 16.04.7 LTS`
- Python version: `Python 3.7.0`
- PyTorch version (GPU?): `No. But I am using PyTorch`
- Tensorflow version (GPU?): `No`
- Using GPU in script?: `No`
- Using distributed or parallel set-up in script?: `No`
### Who can help
@airsplay @bryant1410 Trainer @sgugger
## Information
Model I am using (Bert, XLNet ...): `[Lxmert](https://huggingface.co/transformers/model_doc/lxmert.html)`
The problem arises when using:
* [v] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
* [ ]
Following example from [Lxmert](https://huggingface.co/transformers/model_doc/lxmert.html).
```python
from transformers import LxmertTokenizer, LxmertModel
import torch
tokenizer = LxmertTokenizer.from_pretrained('unc-nlp/lxmert-base-uncased')
model = LxmertModel.from_pretrained('unc-nlp/lxmert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
`Visual question answering` following example from [Lxmert](https://huggingface.co/transformers/model_doc/lxmert.html).
-- If you have code snippets, error messages, stack traces please provide them here as well.
```
File "/home/yezli/miniconda3/lib/python3.8/site-packages/transformers/models/lxmert/modeling_lxmert.py", line 933, in forward
assert visual_feats is not None, "`visual_feats` cannot be `None`"
AssertionError: `visual_feats` cannot be `None`
```
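For reference, a minimal sketch that gets past this assertion by feeding random placeholder region features (in practice these would come from an object detector such as Faster R-CNN); the box count and the output attribute used here are my assumptions, not something taken from the snippet above:
```python
import torch
from transformers import LxmertTokenizer, LxmertModel

tokenizer = LxmertTokenizer.from_pretrained('unc-nlp/lxmert-base-uncased')
model = LxmertModel.from_pretrained('unc-nlp/lxmert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# Random stand-ins for real detector features, only to illustrate the expected shapes
num_boxes = 36  # illustrative number of detected regions
visual_feats = torch.randn(1, num_boxes, model.config.visual_feat_dim)  # (batch, boxes, 2048)
visual_pos = torch.rand(1, num_boxes, model.config.visual_pos_dim)      # normalized box coordinates, (batch, boxes, 4)

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
language_hidden_states = outputs.language_output  # LxmertModel returns language_output / vision_output / pooled_output
```
With random features the outputs are of course meaningless; real Faster R-CNN features would be needed for actual visual question answering.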
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9309/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9308 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9308/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9308/comments | https://api.github.com/repos/huggingface/transformers/issues/9308/events | https://github.com/huggingface/transformers/pull/9308 | 774,785,520 | MDExOlB1bGxSZXF1ZXN0NTQ1NjMxNjgw | 9,308 | [GPT2] Correct gradient checkpointing | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Previously, it was not possible to train GPT2 with gradient_checkpointing and `use_cache=False`. However `use_cache` should not be set to `True` when training. This PR corrects the behavior so that
gradient checkpointing does not require `use_cache=True`.
In addition, this PR changes lists to tuples in GPT2 for consistency with other models.
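A rough sketch of the training setup this enables (the text and configuration below are illustrative only; the flags are set directly on the loaded model's config):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.config.gradient_checkpointing = True  # trade compute for memory
model.config.use_cache = False              # the cache is only useful at generation time
model.train()

inputs = tokenizer("Gradient checkpointing trades compute for memory.", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()  # with this PR, this combination no longer requires use_cache=True
```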
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9308/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9308",
"html_url": "https://github.com/huggingface/transformers/pull/9308",
"diff_url": "https://github.com/huggingface/transformers/pull/9308.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9308.patch",
"merged_at": 1608935292000
} |
https://api.github.com/repos/huggingface/transformers/issues/9307 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9307/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9307/comments | https://api.github.com/repos/huggingface/transformers/issues/9307/events | https://github.com/huggingface/transformers/issues/9307 | 774,754,075 | MDU6SXNzdWU3NzQ3NTQwNzU= | 9,307 | from_pretrained does not load the modified part of model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In the documentation, this is written how from_pretrained works for untouched models, but I cannot see how this works when one modifies the model. ",
"Hey @juliahane,\r\n\r\ncould you please provide a code snippet showcasing the unintended behavior? Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): -
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Model Cards: @julien-c
-->
Model Cards: @julien-c
## Information
Hi
1) I am observing that if one modifies a model, let's say T5ForConditionalGeneration, and then uses T5ForConditionalGeneration.from_pretrained(...), then not all components of the model are loaded, meaning that the parts of the model the user has modified are initialized randomly!
2) I observe this from the accuracy. Could you tell me how I can check which weights from_pretrained is loading (see the sketch below)? I am a bit lost in the repository. Thanks.
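A minimal sketch of one way to check this, using the `output_loading_info=True` flag of `from_pretrained` (the model name is only an example; calling this on a modified subclass shows which of its parameters were left randomly initialized):
```python
from transformers import T5ForConditionalGeneration

model, loading_info = T5ForConditionalGeneration.from_pretrained("t5-base", output_loading_info=True)

# parameters defined by the (possibly modified) class that were NOT found in the checkpoint
print(loading_info["missing_keys"])
# checkpoint weights that found no matching parameter in the class
print(loading_info["unexpected_keys"])
```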
## Expected behavior
If the model has been changed, e.g. to have more layers, all the pretrained weights should still be loaded. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9307/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9306 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9306/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9306/comments | https://api.github.com/repos/huggingface/transformers/issues/9306/events | https://github.com/huggingface/transformers/issues/9306 | 774,710,859 | MDU6SXNzdWU3NzQ3MTA4NTk= | 9,306 | comment correction in test_retrieval_rag.py? | {
"login": "zuujhyt",
"id": 75845952,
"node_id": "MDQ6VXNlcjc1ODQ1OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/75845952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zuujhyt",
"html_url": "https://github.com/zuujhyt",
"followers_url": "https://api.github.com/users/zuujhyt/followers",
"following_url": "https://api.github.com/users/zuujhyt/following{/other_user}",
"gists_url": "https://api.github.com/users/zuujhyt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zuujhyt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zuujhyt/subscriptions",
"organizations_url": "https://api.github.com/users/zuujhyt/orgs",
"repos_url": "https://api.github.com/users/zuujhyt/repos",
"events_url": "https://api.github.com/users/zuujhyt/events{/privacy}",
"received_events_url": "https://api.github.com/users/zuujhyt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"Both comments are right.\r\n\r\n```python\r\nself.assertEqual(doc_dicts[0][\"id\"][0], \"1\") # max inner product is reached with second doc\r\n```\r\nmakes sure the first retrieved document of the first query is the second document of the corpus (it's the one that maximizes the inner product)\r\n\r\nwhile\r\n```python\r\nself.assertEqual(doc_dicts[1][\"id\"][0], \"0\") # max inner product is reached with first doc\r\n```\r\nmakes sure that the first retrieved document of the second query is the first document of the corpus (it's the one that maximizes the inner product)\r\n\r\nTo be clearer the indices in those statements could be written as \r\n```python\r\nquery_idx = 0\r\nretrieved_document_idx = 0\r\nexpected_id = \"0\"\r\nself.assertEqual(doc_dicts[query_idx][\"id\"][retrieved_document_idx], expected_id)\r\n```",
"Thanks for your reply!"
] | 1,608 | 1,609 | 1,609 | NONE | null | Hi, in
https://github.com/huggingface/transformers/blob/master/tests/test_retrieval_rag.py#L223
the comments on L223 and L224 are the same; maybe one of them should say "min inner product is reached with ..." instead.
But I am not sure which one.
Pardon me if it is already correct. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9306/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9305 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9305/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9305/comments | https://api.github.com/repos/huggingface/transformers/issues/9305/events | https://github.com/huggingface/transformers/pull/9305 | 774,701,017 | MDExOlB1bGxSZXF1ZXN0NTQ1NTY4NTIx | 9,305 | [Don't merge] New design proposition for MAPPINGS in "auto" files | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're right that the current design is sub-optimal, especially for the tokenizers since we have introduced tokenizers decoupled from models.\r\n\r\n- Having this approach would imply modifying most configuration files on the hubs, given that you use the approach\r\n\r\n ```\r\n (config.class, config.tokenizer_class) -> ...\r\n ``` \r\n as most models configurations have no tokenizer class defined.\r\n\r\n- The `isinstance` should be replaced by `type` imo, which would prevent having such a test\r\n\r\nOverall I'm definitely not against refactoring this part to ensure better compatibility, but let's try to find a way of making sure we don't have to update 1000s of configurations on the hub. Maybe adding a `tokenizer_class = XXXTokenizer` field in the configurations would prevent this. ",
"> You're right that the current design is sub-optimal, especially for the tokenizers since we have introduced tokenizers decoupled from models.\r\n> \r\n> * Having this approach would imply modifying most configuration files on the hubs, given that you use the approach\r\n> ```\r\n> (config.class, config.tokenizer_class) -> ...\r\n> ```\r\n> \r\n> \r\n> as most models configurations have no tokenizer class defined.\r\n> * The `isinstance` should be replaced by `type` imo, which would prevent having such a test\r\n> \r\n> Overall I'm definitely not against refactoring this part to ensure better compatibility, but let's try to find a way of making sure we don't have to update 1000s of configurations on the hub. Maybe adding a `tokenizer_class = XXXTokenizer` field in the configurations would prevent this.\r\n\r\nSorry, I think my explanation wasn't very clear above - I modified the description. I didn't mean to force configs to have a `tokenizer_class` attribute. The idea was just that the `TOKENIZER_MAPPING` should expose a function that allows one to get the correct tokenizer not only by the config but also by the tokenizer_class as a string. So the idea is that we could replace the current `TOKENIZER_MAPPING` with a class like as (now) shown above, but then this class can be used in whatever way is best by `AutoTokenizer`, *e.g.* the AutoTokenizer's `from_pretrained(...)` method could then call the `TOKENIZER_MAPPING` class above as follows:\r\n\r\n```python\r\nif hasattr(config, tokenizer_class):\r\n tokenizer = TOKENIZER_MAPPING[(config, config.tokenizer_class)]\r\nelse\r\n tokenizer = TOKENIZER_MAPPING[config]\r\n```\r\n ",
"I'd need to see a PoC to be sure, but this looks like an interesting idea to me. There are certainly big limitations in the way those AUTO variables are currently structured.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,618 | 1,618 | MEMBER | null | This PR would solve the issue: https://github.com/huggingface/transformers/issues/9250 but should not be used as a solution.
The PR should rather just show how the current design of all the `OrderedDicts` called `MAPPINGS_...` is suboptimal. It's impossible to add two values if both have the same key. We need to be able to add a tokenizer class to `AutoTokenizers` even if the tokenizer does not have its own unique configuration class. We had a similar problem for RAG, since there are `RagSequenceForGeneration` and `RagTokenForGeneration`, which both should be in the same mapping. IMO, the only way to prevent "key" conflicts 100% of the time is to use "multi-key" to "value" mappings as follows:
Tokenizer:
(PretrainedConfig (the corresponding config class we're using now), str (the tokenizer class as a string, sometimes saved under `config.tokenizer_class`)) -> TokenizerClass
Model:
(PretrainedConfig (the corresponding config class we're using now), str (the model type as a string, sometimes saved under `config.model_type`)) -> ModelClass
Some other "less" important shortcomings of this design:
- Because we often check with `isinstance` whether a config class is in an OrderedDict, we need to be very careful about the position of the key in the ordered dict and even wrote a test for this: https://github.com/huggingface/transformers/blob/21fc676645b1cae7cb9b5835435d57d90f9bc714/tests/test_modeling_auto.py#L221. This added complexity for such a simple feature is quite unnecessary IMO.
- These functions: https://github.com/huggingface/transformers/blob/21fc676645b1cae7cb9b5835435d57d90f9bc714/src/transformers/models/auto/tokenization_auto.py#L249 are more of a hack than a permanent solution IMO.
- We currently don't document those classes. I guess we could but it's just a mapping.
=> I would propose that we change all "MAPPING_FOR_..." objects to a class `MAPPING_FOR_` where we make sure that 100% backward compatibility is kept (except that it is no longer an `OrderedDict`, but a class).
We can implement a `__getitem__` that could take inputs of different types (config for backward comp, but maybe also a "str" corresponding to the `"tokenizer_class"` or `"model_type"`). In general, it would give us more flexibility and prevent errors such as the one linked to this PR.
A possible design could look like this:
```python
from collections import OrderedDict
from typing import Any, List, Tuple, Union


class MappingGenerator:
    def __init__(self, keys_to_values: List[Tuple[PretrainedConfig, str, Any]]):
        # (config, class name) -> class
        self.tuple_to_class = OrderedDict(
            [((keys_to_value[0], keys_to_value[1]), keys_to_value[2]) for keys_to_value in keys_to_values]
        )
        all_configs = [keys_to_value[0] for keys_to_value in keys_to_values]
        self.duplicated_configs = set([x for x in all_configs if all_configs.count(x) > 1])
        self.config_to_class = OrderedDict([(keys_to_value[0], keys_to_value[2]) for keys_to_value in keys_to_values])
        # not possible to have key conflicts here, the class names are unique
        self.str_to_class = OrderedDict([(keys_to_value[1], keys_to_value[2]) for keys_to_value in keys_to_values])

    def __getitem__(self, key: Union[PretrainedConfig, str, Tuple[PretrainedConfig, str]]):
        if isinstance(key, str):
            return self.str_to_class[key]
        elif isinstance(key, PretrainedConfig):
            if key in self.duplicated_configs:
                raise KeyError(f"{key} is ambiguous, use a (config, class name) tuple instead")
            return self.config_to_class[key]
        elif isinstance(key, tuple):
            return self.tuple_to_class[key]
        raise KeyError(key)


TOKENIZER_MAPPING = MappingGenerator([
    (BertConfig, "BertTokenizer", BertTokenizer),
    (GPT2Config, "GPT2Tokenizer", GPT2Tokenizer),
    ...,
])
```
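Purely hypothetical usage sketch of how lookups could then be done (mirroring what `AutoTokenizer.from_pretrained` would need):
```python
tokenizer_cls = TOKENIZER_MAPPING["BertTokenizer"]                   # by tokenizer class name
tokenizer_cls = TOKENIZER_MAPPING[config]                            # by config, when unambiguous
tokenizer_cls = TOKENIZER_MAPPING[(config, config.tokenizer_class)]  # by (config, tokenizer class name) pair
```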
Keen to hear your thoughts on this @LysandreJik, @sgugger, @julien-c before opening a PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9305/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9305",
"html_url": "https://github.com/huggingface/transformers/pull/9305",
"diff_url": "https://github.com/huggingface/transformers/pull/9305.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9305.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9304 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9304/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9304/comments | https://api.github.com/repos/huggingface/transformers/issues/9304/events | https://github.com/huggingface/transformers/issues/9304 | 774,652,118 | MDU6SXNzdWU3NzQ2NTIxMTg= | 9,304 | 【 run_mlm.py 】attention_mask will be set to [1,1,...1] with DataCollatorForLanguageModeling | {
"login": "xieyuchen13",
"id": 13214829,
"node_id": "MDQ6VXNlcjEzMjE0ODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/13214829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xieyuchen13",
"html_url": "https://github.com/xieyuchen13",
"followers_url": "https://api.github.com/users/xieyuchen13/followers",
"following_url": "https://api.github.com/users/xieyuchen13/following{/other_user}",
"gists_url": "https://api.github.com/users/xieyuchen13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xieyuchen13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xieyuchen13/subscriptions",
"organizations_url": "https://api.github.com/users/xieyuchen13/orgs",
"repos_url": "https://api.github.com/users/xieyuchen13/repos",
"events_url": "https://api.github.com/users/xieyuchen13/events{/privacy}",
"received_events_url": "https://api.github.com/users/xieyuchen13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry, I don't really understand the question here. Could you clarify a bit? In case this is a question about the behavior of `DataCollatorForLanguageModeling`, it would be awesome if you could use the forum: https://discuss.huggingface.co/ .\r\nOtherwise it would be great if you provide a code snippet showcasing the unexpected behavior. Thanks!",
"in run_mlm.py\r\n\r\nfirst use:\r\n```python\r\ndef tokenize_function(examples):\r\n # Remove empty lines\r\n examples[\"text\"] = [line for line in examples[\"text\"] if len(line) > 0 and not line.isspace()]\r\n return tokenizer(\r\n examples[\"text\"],\r\n padding=padding,\r\n truncation=True,\r\n max_length=data_args.max_seq_length,\r\n # We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it\r\n # receives the `special_tokens_mask`.\r\n return_special_tokens_mask=True,\r\n )\r\n```\r\n\r\nif i set padding=\"max_length\", the inputs will be padded. that means the inputs will already be padded to [1, 4, 5, 6, 2, 0, 0..] and attention_mask will be set to [1, 1, 1, 1, 1, 0, 0...]. so when use DataCollatorForLanguageModeling, the inputs will be padded again (tokenizer.pad). the inputs will not change but the attention_mask will be [1, 1, 1, 1, 1, 1, 1..].\r\nif i set padding=\"false\", tokenizer.pad in DataCollatorForLanguageModeling will not pad the inputs to max_length. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hey @xieyuchen13 - I don't think the `attention_mask` will change from [1,1,1,...0,0,0] to [1,1,1,....1,1,1] => could you show me an example that proves otherwise? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,622 | 1,622 | NONE | null | tokenized_datasets has been padded when padding="max_length"
when we get a dataloader, DataCollatorForLanguageModeling will call tokenizer.pad first
tokenizer.pad will set attention_mask to all 1s because the input_ids have already been padded
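A minimal sketch of the setup in question (the model name and max_length are placeholders I chose, not the actual run_mlm.py arguments):
```python
from transformers import BertTokenizerFast, DataCollatorForLanguageModeling

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# already padded to max_length, so the attention_mask contains 0s for the padding positions
features = [
    tokenizer(text, padding="max_length", max_length=16, truncation=True, return_special_tokens_mask=True)
    for text in ["a short line", "another short line"]
]
print(features[0]["attention_mask"])  # e.g. [1, 1, 1, 1, 0, 0, ...]

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
batch = data_collator(features)  # internally calls tokenizer.pad on the already padded features
print(batch["attention_mask"])   # do the padding positions still show up as 0 here?
```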
So I want to know whether the attention mask meets expectations. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9304/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9303 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9303/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9303/comments | https://api.github.com/repos/huggingface/transformers/issues/9303/events | https://github.com/huggingface/transformers/pull/9303 | 774,649,111 | MDExOlB1bGxSZXF1ZXN0NTQ1NTMwNDk2 | 9,303 | add translation example | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, @vasudevgupta7 thanks for adding this! \r\nCould you link this notebook in the community notebooks table [here](https://github.com/huggingface/transformers/tree/master/notebooks#community-notebooks) instead of adding it to `/notebooks` ",
"done.",
"Thanks!\r\n\r\nI re-worded the description a bit, hope you don't mind ;)"
] | 1,608 | 1,612 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
This PR will add a translation example to the repo, as per discussion with @thomwolf.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@patil-suraj, @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9303/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9303",
"html_url": "https://github.com/huggingface/transformers/pull/9303",
"diff_url": "https://github.com/huggingface/transformers/pull/9303.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9303.patch",
"merged_at": 1608887870000
} |
https://api.github.com/repos/huggingface/transformers/issues/9302 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9302/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9302/comments | https://api.github.com/repos/huggingface/transformers/issues/9302/events | https://github.com/huggingface/transformers/pull/9302 | 774,547,259 | MDExOlB1bGxSZXF1ZXN0NTQ1NDUzNzQ5 | 9,302 | Fix TF TransfoXL | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes TransfoXL for graph compliancy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9302/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9302",
"html_url": "https://github.com/huggingface/transformers/pull/9302",
"diff_url": "https://github.com/huggingface/transformers/pull/9302.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9302.patch",
"merged_at": 1609185139000
} |
https://api.github.com/repos/huggingface/transformers/issues/9301 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9301/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9301/comments | https://api.github.com/repos/huggingface/transformers/issues/9301/events | https://github.com/huggingface/transformers/pull/9301 | 774,539,218 | MDExOlB1bGxSZXF1ZXN0NTQ1NDQ3NTI4 | 9,301 | Fix TF T5 | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Already run of course 😉 and I can tell you that they all pass!"
] | 1,608 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes a couple of bugs in T5: one for graph compliancy and another one for the `past` output. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9301/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9301",
"html_url": "https://github.com/huggingface/transformers/pull/9301",
"diff_url": "https://github.com/huggingface/transformers/pull/9301.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9301.patch",
"merged_at": 1609185101000
} |
https://api.github.com/repos/huggingface/transformers/issues/9300 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9300/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9300/comments | https://api.github.com/repos/huggingface/transformers/issues/9300/events | https://github.com/huggingface/transformers/pull/9300 | 774,496,070 | MDExOlB1bGxSZXF1ZXN0NTQ1NDEzMDM0 | 9,300 | Fix TF Funnel | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik feel free to merge if it looks ok for you and if @sgugger approves the last fix on `pooled_hidden`."
] | 1,608 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes Funnel to make it fully graph compliant. Even though all the slow/quick tests pass and I got similar results in a few experiments, @sgugger I would appreciate it if you thoroughly looked at the changes to make sure no bugs have been introduced.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9300/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9300",
"html_url": "https://github.com/huggingface/transformers/pull/9300",
"diff_url": "https://github.com/huggingface/transformers/pull/9300.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9300.patch",
"merged_at": 1609844090000
} |
https://api.github.com/repos/huggingface/transformers/issues/9299 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9299/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9299/comments | https://api.github.com/repos/huggingface/transformers/issues/9299/events | https://github.com/huggingface/transformers/pull/9299 | 774,439,933 | MDExOlB1bGxSZXF1ZXN0NTQ1MzY0MzM0 | 9,299 | [Bart doc] Fix outdated statement | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9298. The Bart docs should be slightly updated. Thanks @forest1988!
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9299/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9299/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9299",
"html_url": "https://github.com/huggingface/transformers/pull/9299",
"diff_url": "https://github.com/huggingface/transformers/pull/9299.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9299.patch",
"merged_at": 1608817673000
} |
https://api.github.com/repos/huggingface/transformers/issues/9298 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9298/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9298/comments | https://api.github.com/repos/huggingface/transformers/issues/9298/events | https://github.com/huggingface/transformers/issues/9298 | 774,397,275 | MDU6SXNzdWU3NzQzOTcyNzU= | 9,298 | `transformers.models.bart.modeling_bart._prepare_bart_decoder_inputs` seems to be renamed but remains in the document | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @forest1988,\r\n\r\nThanks a lot for your issue - you're 100% correct, the docs need to be updated here - I'll open a PR for this and tag you.\r\nSo to give some context on why this function doesn't exist anymore.\r\n\r\n- We are trying to align the API of all models which makes it easier for users to switch from one model to the other. No other model had such a function.\r\n- The function was only called internally, so you don't really to care about the change as long as you only call the public API of Bart being at the moment the `forward pass` of `BartModel` and `BartForConditionalGeneration`. All possible behaviors of pubic API functions should have stayed 1-to-1 the same for Bart.\r\n- In the bullet point in question, it says that the model will create the `decoder_input_ids` if they are not passed. This is a very Bart-specific feature and is only really be used in two cases:\r\n 1) You want to do <mask-filling> for bart as shown in the example in this section: https://huggingface.co/transformers/model_doc/bart.html#transformers.BartForConditionalGeneration . In this case you only have to pass the `input_ids` and Bart will correctly output something. All non-Bart seq2seq model yield an error here because they expect the `decoder_input_ids` to be passed as well (which should be the default case IMO). Bart is able to do this task thanks to its rather specific pre-training objective.\r\n2) (This is the same for all Seq2Seq models) you pass `labels` and `input_ids` => this is used fro training and in this case Bart (and all other seq2seq models except EncoderDecoderModel) shift the labels to the right to create the `decoder_input_ids`.",
"So in short, the function doesn't exist anymore in the new code. If you adapted \"old\" bart code to your specific needs and are now stuck to \"port\" it to the new Bart code, I'd suggest to write an integration test using your \"old\" bart code and then closely look at what was done in #8900 to adapt your code analogously. If you're completely stuck, feel free to post an issue here and tag me - I'll help you then :-) ",
"@patrickvonplaten \r\nThank you for your quick comments and for solving the issue! I've checked PR #9299 and would like to say thank you for tagging me there.\r\n\r\nSome time ago, we tried to use various Seq2SeqLMs but had trouble using them in a unified way. The update of aligning APIs is very helpful!\r\n\r\nI read carefully your comments and linked documents, will write an integration test, and closely look #8900 to adapt my code analogously. \r\nIf I’ll be still completely stuck, I would like to take your word and ask for your help.\r\n\r\nThank you again!"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Bart: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises in the [model_doc/bart.rst](https://github.com/huggingface/transformers/blob/v4.1.1/docs/source/model_doc/bart.rst#implementation-notes)
## To reproduce
In our code using transformers v3.4.0, we have used:
```
from transformers.modeling_bart import _prepare_bart_decoder_inputs
```
I tried to rewrite it as:
```
try: # transformers >= v4
from transformers.models.bart.modeling_bart import _prepare_bart_decoder_inputs
except ModuleNotFoundError: # transformers == v3.4.0
from transformers.modeling_bart import _prepare_bart_decoder_inputs
```
but it seems that Bart (`modeling_bart`) in v4.1.1 no longer has `_prepare_bart_decoder_inputs` in its implementation.
However, [model_doc/bart.rst](https://github.com/huggingface/transformers/blob/v4.1.1/docs/source/model_doc/bart.rst#implementation-notes) says
> The forward pass of :class:`~transformers.BartModel` will create decoder inputs (using the helper function :func:`transformers.models.bart.modeling_bart._prepare_bart_decoder_inputs`) if they are not passed. This is different than some other modeling APIs.
I think maybe the function is renamed in the refactoring of Bart #8900.
I welcome this refactoring as I would love to take advantage of Bart (and other Seq2SeqLMs), but I am wondering how I can fix the old code to get the best performance out of the refactored code.
Do you have any document about how to fix the old code to work well with the new version of the Bart?
## Expected behavior
Maybe model_doc/bart.rst needs to be updated.
I'm sorry if appropriate documentation already exists. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9298/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9297 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9297/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9297/comments | https://api.github.com/repos/huggingface/transformers/issues/9297/events | https://github.com/huggingface/transformers/pull/9297 | 774,366,755 | MDExOlB1bGxSZXF1ZXN0NTQ1Mjk0NjEz | 9,297 | fix typo in modeling_encoder_decoder.py | {
"login": "daniele-sartiano",
"id": 1573433,
"node_id": "MDQ6VXNlcjE1NzM0MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1573433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daniele-sartiano",
"html_url": "https://github.com/daniele-sartiano",
"followers_url": "https://api.github.com/users/daniele-sartiano/followers",
"following_url": "https://api.github.com/users/daniele-sartiano/following{/other_user}",
"gists_url": "https://api.github.com/users/daniele-sartiano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daniele-sartiano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniele-sartiano/subscriptions",
"organizations_url": "https://api.github.com/users/daniele-sartiano/orgs",
"repos_url": "https://api.github.com/users/daniele-sartiano/repos",
"events_url": "https://api.github.com/users/daniele-sartiano/events{/privacy}",
"received_events_url": "https://api.github.com/users/daniele-sartiano/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | Fixed typo.
# What does this PR do?
Fixes a typo
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9297/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9297",
"html_url": "https://github.com/huggingface/transformers/pull/9297",
"diff_url": "https://github.com/huggingface/transformers/pull/9297.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9297.patch",
"merged_at": 1608817089000
} |
https://api.github.com/repos/huggingface/transformers/issues/9296 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9296/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9296/comments | https://api.github.com/repos/huggingface/transformers/issues/9296/events | https://github.com/huggingface/transformers/pull/9296 | 774,355,907 | MDExOlB1bGxSZXF1ZXN0NTQ1Mjg0NjM1 | 9,296 | [bert_generation] enable cache by default | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
- add `use_cache` to `BertGenerationConfig` with default to `True`
- in `BertGenerationEncoder`, if `use_cache` is `None` (the default behaviour), set it from the `config`, as sketched below.
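A small illustrative sketch of the resulting behaviour (the helper below is only for illustration, not part of the diff):

```python
from transformers import BertGenerationConfig

config = BertGenerationConfig()
print(config.use_cache)  # True by default after this PR

def resolve_use_cache(use_cache, config):
    # Mirrors the forward-pass fallback: an explicit value wins, otherwise use the config default.
    return use_cache if use_cache is not None else config.use_cache

print(resolve_use_cache(None, config))   # True -> caching enabled by default at inference
print(resolve_use_cache(False, config))  # False -> an explicit value still takes precedence
```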
This will enable caching by default in inference for `BertGenerationEncoder` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9296/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9296",
"html_url": "https://github.com/huggingface/transformers/pull/9296",
"diff_url": "https://github.com/huggingface/transformers/pull/9296.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9296.patch",
"merged_at": 1608812256000
} |
https://api.github.com/repos/huggingface/transformers/issues/9295 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9295/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9295/comments | https://api.github.com/repos/huggingface/transformers/issues/9295/events | https://github.com/huggingface/transformers/issues/9295 | 774,317,340 | MDU6SXNzdWU3NzQzMTczNDA= | 9,295 | Good Second Issue: T5 FP16 in Pytorch | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
" here's what I found\r\n\r\n`t5-small` is the only T5 model that works in fp16 at the moment. The rest of the models produce `nan` loss/logits.\r\n\r\n for all the models and versions (v1, v1.1, mT5) at some point we get `inf` values in `hidden_states` after applying the final linear layer (`wo`) in `T5DenseReluDense` and `T5DenseGatedGeluDense`.\r\nhttps://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/t5/modeling_t5.py#L248-L278\r\n\r\nwhich results in `nan` values in `T5LayerNorm`.\r\n\r\nAlso for `t5-large`, `t5-v1_1-base`, `t5-v1_1-large`, there are `inf` values in the output of `T5LayerSelfAttention` and `T5LayerCrossAttention`, specifically where we add the attn output with the `hidden_states`\r\n\r\nhttps://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/t5/modeling_t5.py#L548\r\n\r\nhttps://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/t5/modeling_t5.py#L584\r\n\r\nThis happens during both training and inference, to reproduce \r\n\r\n```python\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-base\").to(\"cuda:0\").eval()\r\nmodel.half()\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-base\")\r\n\r\nARTICLE = \"\"\"summarize: Marseille, France (CNN)The French prosecutor leading an investigation into the crash of Germanwings Flight 9525 insisted Wednesday that he was not aware of any video footage from on board the plane. Marseille prosecutor Brice Robin told CNN that \"so far no videos were used in the crash investigation.\" He added, \"A person who has such a video needs to immediately give it to the investigators.\" Robin's comments follow claims by two magazines, German daily Bild and French Paris Match, of a cell phone video showing the harrowing final seconds from on board Germanwings Flight 9525 as it crashed into the French Alps. All 150 on board were killed. Paris Match and Bild reported that the video was recovered from a phone at the wreckage site. The two publications described the supposed video, but did not post it on their websites. The publications said that they watched the video, which was found by a source close to the investigation. \"One can hear cries of 'My God' in several languages,\" Paris Match reported. \"Metallic banging can also be heard more than three times, perhaps of the pilot trying to open the cockpit door with a heavy object. Towards the end, after a heavy shake, stronger than the others, the screaming intensifies. Then nothing.\" \"It is a very disturbing scene,\" said Julian Reichelt, editor-in-chief of Bild online. An official with France's accident investigation agency, the BEA, said the agency is not aware of any such video. Lt. Col. Jean-Marc Menichini, a French Gendarmerie spokesman in charge of communications on rescue efforts around the Germanwings crash site, told CNN that the reports were \"completely wrong\" and \"unwarranted.\" Cell phones have been collected at the site, he said, but that they \"hadn't been exploited yet.\" Menichini said he believed the cell phones would need to be sent to the Criminal Research Institute in Rosny sous-Bois, near Paris, in order to be analyzed by specialized technicians working hand-in-hand with investigators. But none of the cell phones found so far have been sent to the institute, Menichini said. 
Asked whether staff involved in the search could have leaked a memory card to the media, Menichini answered with a categorical \"no.\" Reichelt told \"Erin Burnett: Outfront\" that he had watched the video and stood by the report, saying Bild and Paris Match are \"very confident\" that the clip is real. He noted that investigators only revealed they'd recovered cell phones from the crash site after Bild and Paris Match published their reports. \"That is something we did not know before. ... Overall we can say many things of the investigation weren't revealed by the investigation at the beginning,\" he said. What was mental state of Germanwings co-pilot? German airline Lufthansa confirmed Tuesday that co-pilot Andreas Lubitz had battled depression years before he took the controls of Germanwings Flight 9525, which he's accused of deliberately crashing last week in the French Alps. Lubitz told his Lufthansa flight training school in 2009 that he had a \"previous episode of severe depression,\" the airline said Tuesday. Email correspondence between Lubitz and the school discovered in an internal investigation, Lufthansa said, included medical documents he submitted in connection with resuming his flight training. The announcement indicates that Lufthansa, the parent company of Germanwings, knew of Lubitz's battle with depression, allowed him to continue training and ultimately put him in the cockpit. Lufthansa, whose CEO Carsten Spohr previously said Lubitz was 100% fit to fly, described its statement Tuesday as a \"swift and seamless clarification\" and said it was sharing the information and documents -- including training and medical records -- with public prosecutors. Spohr traveled to the crash site Wednesday, where recovery teams have been working for the past week to recover human remains and plane debris scattered across a steep mountainside. He saw the crisis center set up in Seyne-les-Alpes, laid a wreath in the village of Le Vernet, closer to the crash site, where grieving families have left flowers at a simple stone memorial. Menichini told CNN late Tuesday that no visible human remains were left at the site but recovery teams would keep searching. French President Francois Hollande, speaking Tuesday, said that it should be possible to identify all the victims using DNA analysis by the end of the week, sooner than authorities had previously suggested. In the meantime, the recovery of the victims' personal belongings will start Wednesday, Menichini said. Among those personal belongings could be more cell phones belonging to the 144 passengers and six crew on board. Check out the latest from our correspondents . The details about Lubitz's correspondence with the flight school during his training were among several developments as investigators continued to delve into what caused the crash and Lubitz's possible motive for downing the jet. A Lufthansa spokesperson told CNN on Tuesday that Lubitz had a valid medical certificate, had passed all his examinations and \"held all the licenses required.\" Earlier, a spokesman for the prosecutor's office in Dusseldorf, Christoph Kumpa, said medical records reveal Lubitz suffered from suicidal tendencies at some point before his aviation career and underwent psychotherapy before he got his pilot's license. Kumpa emphasized there's no evidence suggesting Lubitz was suicidal or acting aggressively before the crash. 
Investigators are looking into whether Lubitz feared his medical condition would cause him to lose his pilot's license, a European government official briefed on the investigation told CNN on Tuesday. While flying was \"a big part of his life,\" the source said, it's only one theory being considered. Another source, a law enforcement official briefed on the investigation, also told CNN that authorities believe the primary motive for Lubitz to bring down the plane was that he feared he would not be allowed to fly because of his medical problems. Lubitz's girlfriend told investigators he had seen an eye doctor and a neuropsychologist, both of whom deemed him unfit to work recently and concluded he had psychological issues, the European government official said. But no matter what details emerge about his previous mental health struggles, there's more to the story, said Brian Russell, a forensic psychologist. \"Psychology can explain why somebody would turn rage inward on themselves about the fact that maybe they weren't going to keep doing their job and they're upset about that and so they're suicidal,\" he said. \"But there is no mental illness that explains why somebody then feels entitled to also take that rage and turn it outward on 149 other people who had nothing to do with the person's problems.\" Germanwings crash compensation: What we know. Who was the captain of Germanwings Flight 9525? CNN's Margot Haddad reported from Marseille and Pamela Brown from Dusseldorf, while Laura Smith-Spark wrote from London. CNN's Frederik Pleitgen, Pamela Boykoff, Antonia Mortensen, Sandrine Amiel, and Anna-Maja Rappard contributed to this report.\"\"\"\r\n\r\ninputs = tokenizer(ARTICLE, max_length=512, truncation=True, return_tensors=\"pt\").to(\"cuda:0\")\r\nout = model(**inputs, decoder_input_ids=torch.tensor([[tokenizer.pad_token_id]]).to(\"cuda:0\"))\r\ntorch.isnan(out.logits).any()\r\n# => True\r\n```\r\n\r\n## Proposed fix\r\n\r\nTo avoid `inf` values we could clamp the `hidden_states` to the max values for the current data type if there are `inf` in it. i.e\r\n```python\r\nif torch.isinf(hidden_states).any():\r\n clamp_value = torch.finfo(hidden_states.dtype).max - 1000\r\n hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)\r\n```\r\n\r\nwe need to add this after self attn, cross-attn, and the feed-forward layer which is where the `inf` values occur. This works for both `apex` and `amp`\r\n\r\nTo verify this fix, I trained `t5-base`, `t5-v1_1-base` and `t5-v1_1-small` on `cnn/dm` for 10k steps (1.11 epochs)\r\nHere's the training command, to run this clone [this fork](https://github.com/patil-suraj/transformers) and check out the `fix-t5-fp16` branch. 
navigate to `examples/seq2seq` dir, follow the instructions in the readme to download `cnn_dm` and dataset, and then run the following command\r\n\r\n```bash\r\nexport M=google/t5-v1_1-base\r\nexport OUT_DIR=t5-v1_1-base-cnn-fp16\r\nexport DATA_DIR=cnn_dm\r\n\r\npython finetune_trainer.py \\\r\n --model_name_or_path $M \\\r\n --data_dir $DATA_DIR \\\r\n --output_dir $OUT_DIR --overwrite_output_dir \\\r\n --max_steps=10000 \\\r\n --gradient_accumulation_steps=8 \\\r\n --learning_rate=1e-4 \\\r\n --per_device_train_batch_size=4 \\\r\n --n_val 500 \\\r\n --max_target_length=56 --val_max_target_length=128 \\\r\n --fp16 --fp16_backend apex \\\r\n --do_train --do_eval --evaluation_strategy steps \\\r\n --logging_steps=100 --logging_first_step --eval_steps=2500 --save_steps=2500 --save_total_limit=2 \\\r\n --sortish_sampler \\\r\n```\r\n\r\nfor evaluation \r\n```bash\r\npython run_eval.py \\\r\n t5-v1_1-base-cnn-fp16 cnn_dm/test.source hypothesis.txt \\\r\n --reference_path cnn_dm/test.target \\\r\n --score_path metrics.json \\\r\n --device cuda:0 \\\r\n --prefix summarize: \\\r\n --bs 16 \\\r\n --fp16 \\\r\n```\r\n\r\nand got the following metrics (ROUGE2)\r\n1. for `t5-base`: 19.2804\r\n2. for `t5-v1.1-base`: 18.4316\r\n(note that the score for `t5-base` is more because it's already pre-trained on `cnn/dm`)\r\n\r\nTo compare this, evaluated the pre-trained `t5-base` in both `fp32` and `fp16`, which gave the following results\r\n1. `fp16`: 18.3681\r\n2. `fp32`: 18.394\r\n\r\nSo the results are close enough.\r\n\r\nTo verify the fix for `t5-large`, I evaluated the pre-trained `t5-large` in `fp32` and `fp16` (use the same command above to evaluate `t5-large`) and got the following results\r\n1. `fp16`: 19.2734\r\n2. `fp32`: 19.2342\r\n\r\nSurprisingly, rouge2 is slightly better in `fp16`.\r\n\r\nSo with the above fix, the following model types now work in `fp16` (opt level `01`), and give descent speed-up :)\r\n- **T5v1**: `t5-small`, `t5-base`, `t5-large`\r\n- **T5v1_1**: `google/t5-v1_1-small`, `google/t5-v1_1-base`\r\n- **MT5**: `google/mt5-small`, `google/mt5-base`\r\n\r\n`google/t5-v1_1-large` and `google/mt5-large` should also work, will confirm after running few experiments.\r\n\r\n\r\nOne interesting observation,\r\nFor inference, the `t5-base` fine-tuned with `fp16` and evaluated in `fp32` is faster than pre-trained `t5-base` evaluated in `fp16`. See this [colab](https://colab.research.google.com/drive/1UaMBsWp3e1Qf-fYKxXmtulsvPXViKa72?usp=sharing) \r\n\r\n\r\n**Update**:\r\n`google/t5-v1_1-large` still gives `nan` loss after about 200 steps\r\n",
"Great work! We should also share those results on the forum: https://discuss.huggingface.co/ :-) ",
"Hi @exelents\r\n\r\nTo answer your question,\r\nas mentioned above these changes will enable fp16 for all small and base version with `apex` `01` and native `amp`. \r\nFor large models, I only tested it for inference, and it works. Right now I'm training large models and will report the results here.\r\n\r\nDeepSpeed handles it's own fp16 and I don't know all the details about it, so won't be able to help there at the moment. @stas00 might have some ideas as he's working with deepspeed.\r\n\r\nTo sum up, this fix works with `apex 01` and `native amp` with `Seq2SeqTrainer` for training and with `.half` for inference.",
"> DeepSpeed handles it's own fp16 and I don't know all the details about it, so won't be able to help there at the moment. @stas00 might have some ideas as he's working with deepspeed.\r\n\r\nI would like the DeepSpeed integration to be merged and then anybody can start experimenting and seeing what else might be needed to be tweaked. To start with I've been primarily focusing on training/eval just working. The next stage would be using and tuning up.",
"Hi @patil-suraj \r\nIt seems like huggingface still hasn't repaired the FP16 problem in MT5-large or MT5-xl, do you or anynoe else have any plans on it?\r\n",
"Hey @mxa4646,\r\n\r\nT5 was never made to be fully compatible with FP16, it was trained using bfloat16, which has a different range than PyTorch's fp16. There is a good chance though that training T5 with deepspeed and fp16 will work!",
"Hi \r\nI am training mt5-small with deepspeed, with fp16, and I am getting always nan, so far could not managed to make it work, do you mind to share how you set parameters to make it work? I am having a hard time with this and kindly appreciate your help @patrickvonplaten ",
"> T5 was never made to be fully compatible with FP16, it was trained using bfloat16,\r\n\r\nThank you for this insight, @patrickvonplaten - I didn't know that!\r\n\r\nI was reading up on bfloat16 for a related issue https://github.com/huggingface/transformers/issues/10816 and it looks like the main issue is that whenever one does an aggregate operation on big numbers in bfloat16 or fp16 - the accumulate needs to be in fp32. So for example the fix applied here: https://github.com/huggingface/transformers/pull/10815 - so perhaps it is possible to identify such operations and change them to `some_torch_operator(..., , dtype=torch.float32)` so most of the math will still be fp16, but there will be no overflow. And it won't impact the normal fp32 logic, as it'd already be of the same type. And this operation doesn't take much extra memory (other than doubling of the resulting variable size).\r\n\r\nBut here it sounds like the problem is different and it's that bfloat16 may not convert to the same value in fp16. I wonder if someone tried to convert the weights and compare the difference. \r\n\r\nPerhaps it's enough to take the models and finetune them on the same data but in mixed precision and perhaps it'd rectify its level of precision.\r\n\r\n",
"I tried the T5-large in fp16 and it is slower which is really strange. For everything else the same for the same test data i get 5.62 sec with Fp32 and 6.95 sec for Fp16. However fp16 uses almost 50% less memory. ",
"Has this model been implemented for PyTorch yet?"
] | 1,608 | 1,665 | null | MEMBER | null | # 🚀 Feature request
This "Good second issue" should revisit some of the problems we were having with FP16 for `T5ForConditionalGeneration`: https://github.com/huggingface/transformers/issues/4586 and help to make T5 compatible with fp16.
**_Requirements:_**
- use transformers master
- use newest pytorch version
- have access to GPU
**_Context:_**
To better explain the context, let's define the three different pre-trained T5 model types we have:
- **T5v1** (original T5): => this corresponds to all those checkpoints: `t5-small`, `t5-base`, `t5-large`, `t5-3b`, `t5-11b`
- **T5v1_1** (improved T5): => this corresponds to all those checkpoints: `google/t5-v1_1-small`, `google/t5-v1_1-base`, `google/t5-v1_1-large`, `google/t5-v1_1-xl`, `google/t5-v1_1-xxl`. **T5v1_1** has a slightly different architecture than **T5v1**. More info on differences can be found here: https://github.com/huggingface/transformers/issues/6285
- **MT5** (multi-lingual T5): => this model is identical in architecture to **T5v1_1** but has different pre-trained weights and a much larger word embedding matrix.
As shown in this issue https://github.com/huggingface/transformers/issues/4586 , training **T5v1** in fp16 mode led in the past to numerical overflow in the `T5LayerFF` forward pass: https://github.com/huggingface/transformers/blob/6189ae99603bd5dc14c5631f1b4562f78e24d575/src/transformers/models/t5/modeling_t5.py#L279.
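For reference, a sketch of the kind of overflow guard discussed in the comments above (clamping fp16 activations before they reach the layer norm); this is an illustration, not necessarily the final fix:

```python
import torch

def clamp_fp16_overflow(hidden_states: torch.Tensor) -> torch.Tensor:
    # If fp16 activations overflow to +/-inf, pull them back just below the dtype maximum
    # so that the following layer norm does not turn them into nan.
    if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states

x = torch.tensor([1e4, 7e4, -7e4], dtype=torch.float16)  # 7e4 overflows fp16 (max is ~65504)
print(clamp_fp16_overflow(x))
```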
At the time of this issue: https://github.com/huggingface/transformers/issues/4586, **T5v1** was added with a small bug that led to slightly wrong outputs that was only fixed by this PR: https://github.com/huggingface/transformers/pull/8518.
Also, there are now new T5 checkpoints, notably the **T5v1_1** and **MT5** checkpoints, for which it would be very interesting to see whether fp16 can work.
**_Feature Request_**
So for this feature request, we should consider two scenarios:
1) Inference:
For each T5 model type we should test when the models break during inference. This can be as easy as testing the following script for a bunch of different checkpoints on different `input_str`:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
checkpoint = "t5-small" # "google/mt5-small", "google/t5-v1_1-small"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to('cuda')
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
input_str = "Hello there. This is the input." # here it would be better to test much larger inputs
input_ids = tokenizer(input_str, return_tensors="pt").input_ids.to('cuda')
# FP32
output_fp32 = model.generate(input_ids)
# FP16
model.half()
output_fp16 = model.generate(input_ids)
if output_fp32.tolist() == output_fp16.tolist():
print("SUCCESS: Output is equal!")
else:
print("Output is different!")
print("FP32", output_fp32)
print("FP16", output_fp16)
```
2) Training (the more interesting part):
This is probably more important and will require more time/skill. In order to check how T5 does in FP16 training, I'd recommend using the newly added `Seq2SeqTrainer`: https://github.com/huggingface/transformers/blob/6189ae99603bd5dc14c5631f1b4562f78e24d575/src/transformers/trainer_seq2seq.py#L38. I would recommend training on a summarization task, such as CNN/Dailymail. One could closely follow this notebook: https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing, but replace Bert2Bert with the different T5 models. Ideally, different "fp16 backends" should be tested: https://github.com/huggingface/transformers/blob/6189ae99603bd5dc14c5631f1b4562f78e24d575/src/transformers/training_args.py#L216 and one should try to see whether hacks as proposed in https://github.com/huggingface/transformers/issues/4586#issuecomment-748336815 can solve the problem. It would be very interesting to see whether the error happens only for **T5v1** or also for **T5v1_1** and **MT5**, and at what point. For each type it would be great to test "small", "base" and, if possible, even "large". Ideally, one should first create a short summarization fine-tuning script (happy to help here) and then run a bunch of different experiments with different fp16 backends and different models. A minimal sketch of the relevant training arguments is given below.
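As a starting point, a minimal sketch of the training-arguments side of such an experiment (values are placeholders and a CUDA device is assumed, since fp16 requires a GPU); these arguments would then be passed to `Seq2SeqTrainer` together with the model and datasets:

```python
from transformers import TrainingArguments

# Placeholder values; switch fp16_backend between "amp" and "apex" to compare backends.
args = TrainingArguments(
    output_dir="t5_fp16_experiments",
    per_device_train_batch_size=4,
    learning_rate=1e-4,
    num_train_epochs=1,
    logging_steps=100,
    fp16=True,
    fp16_backend="amp",
)
print(args.fp16, args.fp16_backend)
```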
**_Possible Outcome_**
The results of those experiments should be documented here or even better on https://discuss.huggingface.co/. Ideally, a solution to the problem is found and one could publish a nice blog post explaining how to effectively train T5.
## Motivation
T5 is one of the most widely used models in Transformers at the moment, so more results on this issue would be extremely useful for the community. In addition, this issue is a great opportunity to learn more about the limits of fp16 and why some models still require full fp32 support (at least until bfloat16 is better supported in torch). This is not an easy issue to tackle, but it is an extremely important one.
## Your contribution
I'm happy to help along the way, starting with making a nice T5 summarization training pipeline that lets one easily test different models and fp16 backends.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9295/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9295/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9294 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9294/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9294/comments | https://api.github.com/repos/huggingface/transformers/issues/9294/events | https://github.com/huggingface/transformers/pull/9294 | 774,314,021 | MDExOlB1bGxSZXF1ZXN0NTQ1MjQ5MjYz | 9,294 | Fix TF input for np.ndarray | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I commented on the corresponding issue, I don't fully understand what's going on there in the error",
"This is on purpose because all the methods of a Keras Model allow to have `np.ndarray` as input. You can check for example `fit`, `predict` or `evaluate` here https://www.tensorflow.org/api_docs/python/tf/keras/Model. They all take a numpy array as possible input",
"Okey, I see. I still don't think we should provide this feature, just because keras has some automatic conversion internally. Is there a use case where one cannot forward a TF tensor and has to forward a `nd.array`? The general philosophy of the lib is to \"not add too many magic functions\" and allowing `nd.arrays` as inputs for TF seems like opening the door for lots of future issues to me. Let's see what @sgugger @LysandreJik think about it",
"On my side I would prefer to keep as much compliancy as possible with TF. But if everyone are not confident because of this Keras magic, I'm ok to do not provide it :)",
"The TF models do not accept numpy arrays inputs, so this would allow bad inputs to be passed to TF models. I think we should stick with inputs accepted by TF models only.",
"The TF models allow to have numpy arrays as inputs (if we had the `np.ndarray` type among the allowed ones). As example:\r\n```python\r\nfrom transformers import TFBertForSequenceClassification, BertTokenizer\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\ninputs = tokenizer(\"Hello\", return_tensors=\"np\")\r\nmodel = TFBertForSequenceClassification.from_pretrained(\"bert-base-cased\")\r\nmodel(inputs)\r\n```\r\nGives:\r\n```\r\nTFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[0.08065815, 0.58226204]], dtype=float32)>, hidden_states=None, attentions=None)\r\n```\r\n\r\nKeras layers/models are by default compliant with numpy arrays.",
"Ah my bad, I tried but forgot to checkout the PR before :facepalm:\r\nIf TF models do accept those inputs then, I have no strong objection.",
"@LysandreJik you mean adding a test in `test_modeling_tf_common` with numpy array as input for each model?",
"Yes, for example !",
"There is a now a new test to be sure that the models can be properly executed with numpy inputs.",
"Does it looks ok to be merged?"
] | 1,608 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
This PR allows the `np.ndarray` datatype as an input type for the models.
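A minimal check mirroring the example discussed in the comments above (checkpoint name only as an example):

```python
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")

# return_tensors="np" yields plain numpy arrays instead of tf.Tensor objects.
inputs = tokenizer("Hello", return_tensors="np")

# With np.ndarray among the allowed input types, the model accepts them directly.
outputs = model(inputs)
print(outputs.logits)
```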
# Fixes
#9248
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9294/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9294",
"html_url": "https://github.com/huggingface/transformers/pull/9294",
"diff_url": "https://github.com/huggingface/transformers/pull/9294.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9294.patch",
"merged_at": 1610112210000
} |
https://api.github.com/repos/huggingface/transformers/issues/9293 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9293/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9293/comments | https://api.github.com/repos/huggingface/transformers/issues/9293/events | https://github.com/huggingface/transformers/pull/9293 | 774,294,390 | MDExOlB1bGxSZXF1ZXN0NTQ1MjMyOTE4 | 9,293 | Update tokenization_utils_base.py | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | Missing "s" typo in the error message, which is an invalid argument. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9293/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9293",
"html_url": "https://github.com/huggingface/transformers/pull/9293",
"diff_url": "https://github.com/huggingface/transformers/pull/9293.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9293.patch",
"merged_at": 1608817395000
} |
https://api.github.com/repos/huggingface/transformers/issues/9292 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9292/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9292/comments | https://api.github.com/repos/huggingface/transformers/issues/9292/events | https://github.com/huggingface/transformers/pull/9292 | 774,293,440 | MDExOlB1bGxSZXF1ZXN0NTQ1MjMyMTQx | 9,292 | Fix TF Flaubert | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes Flaubert to make it executable in graph mode.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9292/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9292",
"html_url": "https://github.com/huggingface/transformers/pull/9292",
"diff_url": "https://github.com/huggingface/transformers/pull/9292.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9292.patch",
"merged_at": 1609772789000
} |
https://api.github.com/repos/huggingface/transformers/issues/9291 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9291/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9291/comments | https://api.github.com/repos/huggingface/transformers/issues/9291/events | https://github.com/huggingface/transformers/pull/9291 | 774,291,103 | MDExOlB1bGxSZXF1ZXN0NTQ1MjMwMjAz | 9,291 | Fix TF CTRL | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the model inputs in TF CTRL. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9291/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9291",
"html_url": "https://github.com/huggingface/transformers/pull/9291",
"diff_url": "https://github.com/huggingface/transformers/pull/9291.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9291.patch",
"merged_at": 1609772211000
} |
https://api.github.com/repos/huggingface/transformers/issues/9290 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9290/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9290/comments | https://api.github.com/repos/huggingface/transformers/issues/9290/events | https://github.com/huggingface/transformers/issues/9290 | 774,280,457 | MDU6SXNzdWU3NzQyODA0NTc= | 9,290 | Problem converting slow tokenizer to fast: token out of vocabulary | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"That looks like a tough one...we might need help from @n1t0 here",
"The files for this model can be found [here](https://huggingface.co/pdelobelle/robbert-v2-dutch-base/tree/main).\r\n\r\nIn `merges.txt` is the merge rule: `ĉ Ċ`\r\nThat means that the symbol `Ċ` should be present in `vocab.json`\r\nIt is not.\r\nSo the error does kind of make sense.\r\n\r\n @iPieter are these the correct files?\r\nIn the paper you mention:\r\n> We limited the vocabulary to 40k words,\r\n> which is 10k words less than RobBERT v1, due to\r\n> additional tokens including non-negligible num-\r\n> ber of Unicode tokens that are not used in Dutch.\r\n\r\nThere are 39982 words in the vocabulary. \r\nIs it possible that some of the symbols/tokens are missing?\r\n\r\n\r\n",
"@schelv Thanks for looking into the issue. Those files should be correct, there are indeed 39982 tokens.\r\n\r\nI am using the same files internally without any issues on the old tokenizer (i.e. correct behaviour and sensible predictions), for which the files were specifically created. I also looked into this error before, but cannot find an easy fix. There are a few other merge rules that are also conflicting. \r\n\r\nMy suspicion is that this stems from the translation of the tokenizer's files from Fairseq to HF. This was done a year ago, so the details might be a bit fuzzy. The problem was that the Fairseq library, where we trained RobBERT, used the vocab and merges file, but also generated an additional file (`dict.txt`) that was used to count the number of occurrences for each token (which is ok, not the issue) and also orders the tokens. \r\n\r\nThese new, ordered positions were then used in Fairseq, while HF uses the id's from the vocab.json file. So this means there is an additional dictionary lookup. To fix this behaviour in HF transformers, I created a script to merge the vocab.sjon with the dict.txt. Otherwise, the token id's from unrelated tokens would be used for the embedding layer and the MLM task, giving a garbage output. \r\n\r\nI will investigate this translation step again, but I'm confused by the fact that the behaviour is correct with the slow tokenizer.\r\n\r\n",
"Thanks for the quick answer!\r\n\r\nJust checking if I understand you correctly:\r\nFairseq uses token id's that are based on the token occurrence count. This information is stored in dict.txt\r\nThe token id's of the original vocab.json were updated with the information from dict.txt? did this create the current vocab.json that is loaded by the transformers library?\r\n\r\nIf you upload the original files somewhere I can also take a look at them.🙂",
"Yes, that's exactly right. The conversion script is [here](https://github.com/iPieter/RobBERT/blob/340bd9d87ef362462fccf8f44e7740c7dfd1d865/src/convert_roberta_dict.py) and [this is the unit test](https://github.com/iPieter/RobBERT/blob/340bd9d87ef362462fccf8f44e7740c7dfd1d865/tests/test_convert_roberta_dict.py) that missed this case. The original files are also downloadable from our github release [here](https://github.com/iPieter/RobBERT/releases/tag/v2.0), but you have to download the entire model. \r\n\r\nHowever, when writing the previous response, I had an insight in what the issue might be. So'll try to debug and report the results here.\r\n\r\n_Long report ahead, TLDR: it now works_ 😃 \r\n\r\nThere might be tokens in the `vocab.json` that was generated by the HuggingFace tokenizer library that are not found by the Fairseq tokenizer, thus they don't occur in the `dict.txt`. After some debugging, I found the these tokens (the special tokens are handled later): \r\n\r\n```python\r\n00:'Á'\r\n01:'÷'\r\n02:'À'\r\n03:'þ'\r\n04:'ø'\r\n05:'ÿ'\r\n06:'ú'\r\n07:'ö'\r\n08:'ĉĊ'\r\n09:'</s>'\r\n10:'č'\r\n11:'ĠTheCompany'\r\n12:'û'\r\n13:'ü'\r\n14:'<unk>'\r\n15:'ý'\r\n16:'õ'\r\n17:'<pad>'\r\n18:'Ċ'\r\n19:'<s>'\r\n20:'<mask>'\r\n21:'ù'\r\n```\r\n\r\nSo these tokens are what is causing the fast tokenizer to complain, since they appear in the `vocab.json` set and not in the `dict.txt` set. Ignoring the special tokens (`<unk>`, `<s>`, `</s>` and `<pad>`), this brings the latest vocab id to 39996, not yet 40k. So there is a second bug in my conversion script. \r\n\r\nThe second bug has to do with the fact that Fairseq adds 2 custom tokens by default that I didn't remove. That's not a big deal, but they do affect the vocab length, so let's be totally correct and add those two tokens and the mask token (since `robbert-v2-dutch-base` is and MLM model) as well:\r\n\r\n<img width=\"1019\" alt=\"image\" src=\"https://user-images.githubusercontent.com/6965756/109134694-9a9ff880-7756-11eb-9734-f678d6dc8845.png\">\r\n\r\nTime for a sanity check:\r\n```python\r\n[{'sequence': 'Er staat een boom in mijn tuin.', 'score': 0.16917602717876434, 'token': 2600, 'token_str': ' boom'},\r\n {'sequence': 'Er staat een bankje in mijn tuin.', 'score': 0.08176644891500473, 'token': 21620, 'token_str': ' bankje'}, \r\n{'sequence': 'Er staat een schutting in mijn tuin.', 'score': 0.0384209081530571, 'token': 15000, 'token_str': ' schutting'}, \r\n{'sequence': 'Er staat een vijver in mijn tuin.', 'score': 0.038086555898189545, 'token': 8217, 'token_str': ' vijver'}, \r\n{'sequence': 'Er staat een plant in mijn tuin.', 'score': 0.03249552100896835, 'token': 2721, 'token_str': ' plant'}]\r\n````\r\n\r\nThe vocab.json in `robbert-v2-dutch-base` is updated, so this issue can be closed.\r\n\r\n",
"Great, thanks a lot for the investigation @iPieter! Now I can use the fast tokenizers in all their glory.",
"Thanks from me as well! Looking forward to using it! "
] | 1,608 | 1,614 | 1,614 | COLLABORATOR | null | When I try to use a [Dutch RoBERTa model](https://huggingface.co/pdelobelle/robbert-v2-dutch-base/tree/main#how-to-use) as suggested, the library tries to convert the old (slow) tokenizer to the fast one. However, this leads to issues. (I can just keep the slow one, but I need to use the offset and word_ids functionality which is only available in the fast tokenizers.)
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
```
Error trace:
```
File "C:/dev/python/jasper-tok2vec/main.py", line 10, in main
tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 378, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\tokenization_utils_base.py", line 1804, in from_pretrained
return cls._from_pretrained(
File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\tokenization_utils_base.py", line 1877, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\models\roberta\tokenization_roberta_fast.py", line 160, in __init__
super().__init__(
File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\models\gpt2\tokenization_gpt2_fast.py", line 133, in __init__
super().__init__(
File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\tokenization_utils_fast.py", line 89, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\convert_slow_tokenizer.py", line 642, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\convert_slow_tokenizer.py", line 262, in converted
BPE(
Exception: Error while initializing BPE: Token `Ċ` out of vocabulary
```
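In the meantime I can keep the slow tokenizer as a stop-gap (this is just the workaround implied above, with the caveat that the offset and word_ids functionality is then unavailable):
```python
from transformers import AutoTokenizer

# Stop-gap: load only the slow (Python) tokenizer so the failing slow->fast conversion is skipped.
tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base", use_fast=False)
```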
### Environment
- `transformers` version: 4.1.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.2
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
### Who can help
@mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9290/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9290/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9289 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9289/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9289/comments | https://api.github.com/repos/huggingface/transformers/issues/9289/events | https://github.com/huggingface/transformers/pull/9289 | 774,219,784 | MDExOlB1bGxSZXF1ZXN0NTQ1MTcwMzAw | 9,289 | Fix typo in file_utils.py | {
"login": "jungwhank",
"id": 53588015,
"node_id": "MDQ6VXNlcjUzNTg4MDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungwhank",
"html_url": "https://github.com/jungwhank",
"followers_url": "https://api.github.com/users/jungwhank/followers",
"following_url": "https://api.github.com/users/jungwhank/following{/other_user}",
"gists_url": "https://api.github.com/users/jungwhank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungwhank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungwhank/subscriptions",
"organizations_url": "https://api.github.com/users/jungwhank/orgs",
"repos_url": "https://api.github.com/users/jungwhank/repos",
"events_url": "https://api.github.com/users/jungwhank/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungwhank/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Fix typo of `add_code_sample_docstrings` in `file_utils.py`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9289/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9289",
"html_url": "https://github.com/huggingface/transformers/pull/9289",
"diff_url": "https://github.com/huggingface/transformers/pull/9289.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9289.patch",
"merged_at": 1608797914000
} |
https://api.github.com/repos/huggingface/transformers/issues/9288 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9288/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9288/comments | https://api.github.com/repos/huggingface/transformers/issues/9288/events | https://github.com/huggingface/transformers/pull/9288 | 774,169,212 | MDExOlB1bGxSZXF1ZXN0NTQ1MTI4Mzg5 | 9,288 | [doc] How To Request Support document stab | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I really like the idea of adding such a document. Two general things:\r\n\r\n1) I think we should make this document a bit more positive and not a \"mandatory read\" before posting an issue. IMO, in general we are (or should be) happy about all issues. Even if the issue is very badly formulated, it gives us a good signal on how the users work with the library and which features are more and which are less used.\r\n\r\n2) I'd make a clearer distinction with what should be in the forum and what should be an issue. I've probably already answered ~30 times on issues that the user should please redirect the question to the forum. I'd like to make a clearer distinction here:\r\n\r\nThe issues should ideally only be used for bug reports.\r\n\r\n**_Forum:_** All \"please explain\" questions or objectively very user-specific feature requests should land in the forum. IMO those should never land in the issues. What I mean by that are *e.g.* the following kinds of issues:\r\n\r\ni. \"I would like to use a BertModel within a RL-Agent for a customer support service. How can I use a `BertForMaskedLM` in my `ChatBotModel`?\"\r\n\r\nii. \"Could you please explain why T5 has no positional embedding matrix under `T5Model`?\"\r\n\r\niii. \"How should I set my generation parameters for translation?\"\r\n\r\niiii. \"How to train T5 on De->En translation?\" \r\n\r\n=> all these kinds of questions do not belong to the issues IMO. None of those issues hint at a bug in Transformers and have definitely a better place in the forum IMO. But, again, we **do** want people to ask exactly these questions and I'm more than happy to answer all of them (maybe i. a bit less). Not sure what @thomwolf @LysandreJik @sgugger think here.\r\n\r\n**_Issues:_** Everything which hints at a bug should be opened as an issue in Transformers. Here again, I'd like to encourage people to open issues as I'm much happier with users posting an objectively badly written issue than with users discovering an issue, but being afraid to post it in Transformers. Having said this, I really like the points you've written down so far @stas00! One thing, I like to add (as one of the first points) is that users should google (or whatever SE) their issue before opening the exact same one (yes using a SE with \"your issue here\" + \"transformers\" + \"huggingface\" often gives better results than searching on github itself). I often just link a new issue to an already answered one (which is also not that bad since it shows us again which parts of the transformer are heavily used). And I think in some cases it is fine if the user posts a link to a colab if it's bug absolutely requires a big data set.\r\n\r\nIn general, I think such a document can save us a lot of time because we can just link to it on issues that are badly written and usually are as a consequence just ignored by us. Think the author of a \"badly\" written issue can then better understand why we sometimes stop answering.\r\n",
"Great suggestions, @patrickvonplaten - Thank you - I hope I integrated them well.\r\n\r\nI think at this point we are in the gathering stage - so bring on the ideas and points you feel are important.\r\n\r\nWhen this stage is done we will do a final edit so that the document feels most welcoming to the users.",
"All comments/suggestions you have kindly offered so far have been addressed. A vous!",
"> After applying Patrick's suggestion, I think this document is in excellent shape. One nit I would have is that it's written in perfect English. We have a lot of users that are not native speakers, and possibly won't understand some parts of it. Then again, I don't think we should reduce the quality of this document, but let's think about what we can do in general to be friendly to non-English speaking users/contributors.\r\n\r\nOh, thank you for the complement as I'm one very non-native English speaker ;) But I totally hear what you mean.\r\n\r\nPerhaps the simplest thing to do is to add a note that if someone struggles with understanding anything in the document, they can ask us to make it more easy to understand? That way it doesn't have to be imperfectly perfect from the get-going.",
"Merging, thanks a lot @stas00!",
"Copied this into [this forum post](https://discuss.huggingface.co/t/how-to-request-support/3128) and added a link to it in the document in [this commit](https://github.com/huggingface/transformers/commit/6009668c631aa5773c66aa30c6bfd9c191e2a6be)."
] | 1,608 | 1,610 | 1,610 | CONTRIBUTOR | null | As discussed, it'd be great to have a clear document with guidelines on how to create great Issues that are easy to understand, reproduce and resolve. I wrote this stab of a document to get things started.
Please feel free to edit it further to your satisfaction. I'm not attached to any part of it, just brain-dumping what came to mind based on my experience. So feel free to add, remove, reformat, etc. It's best to commit your edits directly into this branch, or second best, via suggestions. I don't need to be the middleman.
Please tag others that you think may want to contribute to this document.
If this is successful, perhaps a similar document will be needed for PRs - or perhaps down the road it will be a single document as there is a lot of overlap between writing a good Issue and a good PR. But let's start simple.
@sgugger, @patrickvonplaten, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9288/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9288",
"html_url": "https://github.com/huggingface/transformers/pull/9288",
"diff_url": "https://github.com/huggingface/transformers/pull/9288.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9288.patch",
"merged_at": 1610375031000
} |
https://api.github.com/repos/huggingface/transformers/issues/9287 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9287/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9287/comments | https://api.github.com/repos/huggingface/transformers/issues/9287/events | https://github.com/huggingface/transformers/issues/9287 | 774,158,794 | MDU6SXNzdWU3NzQxNTg3OTQ= | 9,287 | SummarizationModule, Trainer and BertPreTrainedModel | {
"login": "ziqi-zhang",
"id": 17118335,
"node_id": "MDQ6VXNlcjE3MTE4MzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/17118335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziqi-zhang",
"html_url": "https://github.com/ziqi-zhang",
"followers_url": "https://api.github.com/users/ziqi-zhang/followers",
"following_url": "https://api.github.com/users/ziqi-zhang/following{/other_user}",
"gists_url": "https://api.github.com/users/ziqi-zhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ziqi-zhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ziqi-zhang/subscriptions",
"organizations_url": "https://api.github.com/users/ziqi-zhang/orgs",
"repos_url": "https://api.github.com/users/ziqi-zhang/repos",
"events_url": "https://api.github.com/users/ziqi-zhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ziqi-zhang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ziqi-zhang,\r\nthe distillation code is written using the `pytorch-lightning` framework and `SummarizationModule` is a lightning module. You should go through the lightning docs to see how training works for lightning modules and how to customize it.\r\n\r\n`Trainer` is transformers training helper which is different from lightning and there is no connection between the two.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | NONE | null | Hi,
I wonder what the relationship is between SummarizationModule (SummarizationDistiller), Trainer and BertPreTrainedModel? I want to reimplement the distillation.py of the seq2seq example and run it on the glue dataset, but I'm confused by the relationship between these three classes.
The model parameter of Trainer is a BertPreTrainedModel and the Trainer trains the model, but in the SummarizationDistiller there isn't a training function. I didn't find the training process in distillation.py or finetune.py. Should I pass a SummarizationDistiller object to Trainer to train the model? Or how should I train my custom SummarizationDistiller? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9287/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9286 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9286/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9286/comments | https://api.github.com/repos/huggingface/transformers/issues/9286/events | https://github.com/huggingface/transformers/issues/9286 | 774,124,634 | MDU6SXNzdWU3NzQxMjQ2MzQ= | 9,286 | Why Bert-chinese use do_lower_case=False? | {
"login": "Fei-Wang",
"id": 11441526,
"node_id": "MDQ6VXNlcjExNDQxNTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/11441526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fei-Wang",
"html_url": "https://github.com/Fei-Wang",
"followers_url": "https://api.github.com/users/Fei-Wang/followers",
"following_url": "https://api.github.com/users/Fei-Wang/following{/other_user}",
"gists_url": "https://api.github.com/users/Fei-Wang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fei-Wang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fei-Wang/subscriptions",
"organizations_url": "https://api.github.com/users/Fei-Wang/orgs",
"repos_url": "https://api.github.com/users/Fei-Wang/repos",
"events_url": "https://api.github.com/users/Fei-Wang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fei-Wang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmm, I don't really know how to best answer your question here...maybe @JetRunner ?",
"using lowercase=False preserves the information about casing and this information maybe helpful in the context of the some work. such as for sentiment analysis i think casing is not important there but for the task like NER it maybe useful. Again but for many task where we deal with multiple languages is it recommended to use cased model because i think every language has its own grammar and syntax and maybe casing helps in some or the other way?\r\n\r\nCould this be the reason why Bert-chinese uses lowercase=False by default?",
"I think your explanation makes sense. \nBut there are no capital letters in the Chinese vocab.txt, so all words contain capitals will be regarded as [unk].\n\nSent from my iPhone\n\n> On Dec 25, 2020, at 4:57 AM, Shubham kumar <[email protected]> wrote:\n> \n> \n> using lowercase=False preserves the information about casing and this information maybe helpful in the context of the some work. such as for sentiment analysis i think casing is not important there but for the task like NER it maybe useful. Again but for many task where we deal with multiple languages is it recommended to use cased model because i think every language has its own grammar and syntax and maybe casing helps in some or the other way?\n> \n> Could this be the reason why Bert-chinese uses lowercase=False by default?\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n",
"Well I would say the design of Chinese BERT is not necessarily the best. It makes sense to use only lower cases to resolve the data sparsity since there are not many English sentences in Chinese Wikipedia.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,618 | 1,618 | CONTRIBUTOR | null | Some Chinese text contains English words, for example: "Apples是苹果的复数形式。". I have questions about how to tokenize such text:
1. Why is Chinese BERT case sensitive, when I can't even find 'A' in vocab.txt?
2. Because there are few English words in the Chinese vocab.txt, should I use the wordpiece tokenizer as the default, like "['apple', '##s', '是', '苹', ...]", or split into characters to tokenize, like "['a', 'p', 'p', 'l', 'e', 's', '是', '苹', ...]"? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9286/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9285 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9285/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9285/comments | https://api.github.com/repos/huggingface/transformers/issues/9285/events | https://github.com/huggingface/transformers/issues/9285 | 774,110,438 | MDU6SXNzdWU3NzQxMTA0Mzg= | 9,285 | TFRobertaModel warning - `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated | {
"login": "eyalshafran",
"id": 16999574,
"node_id": "MDQ6VXNlcjE2OTk5NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/16999574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyalshafran",
"html_url": "https://github.com/eyalshafran",
"followers_url": "https://api.github.com/users/eyalshafran/followers",
"following_url": "https://api.github.com/users/eyalshafran/following{/other_user}",
"gists_url": "https://api.github.com/users/eyalshafran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyalshafran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyalshafran/subscriptions",
"organizations_url": "https://api.github.com/users/eyalshafran/orgs",
"repos_url": "https://api.github.com/users/eyalshafran/repos",
"events_url": "https://api.github.com/users/eyalshafran/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyalshafran/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nThese two messages are just a warning, you can ignore them if you are not concerned. Basically, these messages will always be displayed everytime the graph node is executed, and only in graph mode.",
"Is there a way to suppress these warnings? They overwhelm the logs with useless messages...",
"@jplu the issue still persists, what is the purpose of these warnings? Why are they displayed to the user of the library?",
"@jklaise @Gilthans The logging happens through a TF Logger (See [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L50)) . \r\n\r\nYou can suppress them by using something like `tf.get_logger().setLevel('ERROR')`"
] | 1,608 | 1,631 | 1,609 | NONE | null | I'm using:
- google colab
- transformers 4.1.1
- tensorflow 2.4.0 (with gpu)
- model TFRobertaModel
I keep getting a warning when calling TFRobertaModel. For example:
```
# imports needed to run this snippet
from transformers import RobertaTokenizer, RobertaConfig, TFRobertaModel
import tensorflow as tf

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
config = RobertaConfig.from_pretrained('roberta-base')
roberta_layer = TFRobertaModel(config)
max_seq_len = 64
ids = tf.keras.layers.Input((max_seq_len,), dtype=tf.int32)
att = tf.keras.layers.Input((max_seq_len,), dtype=tf.int32)
tok = tf.keras.layers.Input((max_seq_len,), dtype=tf.int32)
roberta_inputs = [ids, att, tok]
sequence_output = roberta_layer(ids,attention_mask=att,token_type_ids=tok) # this produces the message
```
Produces the following message:
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model. They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
I tried to set the variables in the config object but there is no change in the message:
```
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
config = RobertaConfig.from_pretrained('roberta-base', output_attentions=False, output_hidden_states=False, return_dict=True)
roberta_layer = TFRobertaModel(config)
```
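As an aside, a minimal way to silence these log lines (assuming it is acceptable to hide other TF warnings as well) is to raise the level of the TF logger that, per the reply quoted above, these models log through:
```python
import tensorflow as tf

# The TF transformers models log through tf.get_logger(); raising its level hides the
# repeated graph-mode messages (and, as a side effect, other TF warnings too).
tf.get_logger().setLevel("ERROR")
```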
@jplu, @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9285/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9284 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9284/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9284/comments | https://api.github.com/repos/huggingface/transformers/issues/9284/events | https://github.com/huggingface/transformers/pull/9284 | 774,102,554 | MDExOlB1bGxSZXF1ZXN0NTQ1MDc4NDQ3 | 9,284 | [Templates] Adapt Bert | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adapt Bert-like templates following https://github.com/huggingface/transformers/pull/9183. This should fix the templates test on master.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9284/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9284",
"html_url": "https://github.com/huggingface/transformers/pull/9284",
"diff_url": "https://github.com/huggingface/transformers/pull/9284.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9284.patch",
"merged_at": 1608770673000
} |
https://api.github.com/repos/huggingface/transformers/issues/9283 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9283/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9283/comments | https://api.github.com/repos/huggingface/transformers/issues/9283/events | https://github.com/huggingface/transformers/pull/9283 | 773,930,861 | MDExOlB1bGxSZXF1ZXN0NTQ0OTM3MTQx | 9,283 | Fix TF DPR | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten , I reworked a bit the approach. Now `TFDPREncoder` and `TFDPRSpanPredictor` are still models and keep their features from `TFPreTrainedModel` while all the DPR models benefit of the serving.\r\n\r\nAll the slow/quick tests are still passing.",
"All @sgugger comments have been addressed."
] | 1,608 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR reworks the DPR architecture for its TF version. The rework allows DPR models to be saved as proper saved models.
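For context, a rough sketch of the kind of export this unlocks (the checkpoint name and the explicit `tf.saved_model.save` call below are illustrative assumptions, not code from this PR):
```python
import tensorflow as tf
from transformers import DPRQuestionEncoderTokenizer, TFDPRQuestionEncoder

tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
model = TFDPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

# Call the model once so the graph/serving signature is traced before exporting.
_ = model(tokenizer("hello world", return_tensors="tf"))

# Export as a TF SavedModel.
tf.saved_model.save(model, "tf_dpr_question_encoder_saved_model")
```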
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9283/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9283",
"html_url": "https://github.com/huggingface/transformers/pull/9283",
"diff_url": "https://github.com/huggingface/transformers/pull/9283.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9283.patch",
"merged_at": 1609777616000
} |
https://api.github.com/repos/huggingface/transformers/issues/9282 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9282/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9282/comments | https://api.github.com/repos/huggingface/transformers/issues/9282/events | https://github.com/huggingface/transformers/pull/9282 | 773,901,084 | MDExOlB1bGxSZXF1ZXN0NTQ0OTEyOTU4 | 9,282 | Adapt to new name of `label_smoothing_factor` training arg | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
This PR changes `label_smoothing` to its new name `label_smoothing_factor` in the tests and scripts that use it.
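For illustration, the renamed argument is the one exposed on `TrainingArguments` (the values below are placeholders, not taken from this PR):
```python
from transformers import TrainingArguments

# `label_smoothing` was renamed to `label_smoothing_factor`; 0.1 is just an example value.
args = TrainingArguments(output_dir="output", label_smoothing_factor=0.1)
```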
Pinging @stas00 for information but will merge when CI is passing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9282/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9282/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9282",
"html_url": "https://github.com/huggingface/transformers/pull/9282",
"diff_url": "https://github.com/huggingface/transformers/pull/9282.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9282.patch",
"merged_at": 1608739522000
} |
https://api.github.com/repos/huggingface/transformers/issues/9281 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9281/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9281/comments | https://api.github.com/repos/huggingface/transformers/issues/9281/events | https://github.com/huggingface/transformers/pull/9281 | 773,734,818 | MDExOlB1bGxSZXF1ZXN0NTQ0NzY0MDI4 | 9,281 | tapas utils | {
"login": "shashankMadan-designEsthetics",
"id": 45225143,
"node_id": "MDQ6VXNlcjQ1MjI1MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/45225143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashankMadan-designEsthetics",
"html_url": "https://github.com/shashankMadan-designEsthetics",
"followers_url": "https://api.github.com/users/shashankMadan-designEsthetics/followers",
"following_url": "https://api.github.com/users/shashankMadan-designEsthetics/following{/other_user}",
"gists_url": "https://api.github.com/users/shashankMadan-designEsthetics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashankMadan-designEsthetics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashankMadan-designEsthetics/subscriptions",
"organizations_url": "https://api.github.com/users/shashankMadan-designEsthetics/orgs",
"repos_url": "https://api.github.com/users/shashankMadan-designEsthetics/repos",
"events_url": "https://api.github.com/users/shashankMadan-designEsthetics/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashankMadan-designEsthetics/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9281/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9281",
"html_url": "https://github.com/huggingface/transformers/pull/9281",
"diff_url": "https://github.com/huggingface/transformers/pull/9281.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9281.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9280 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9280/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9280/comments | https://api.github.com/repos/huggingface/transformers/issues/9280/events | https://github.com/huggingface/transformers/issues/9280 | 773,680,052 | MDU6SXNzdWU3NzM2ODAwNTI= | 9,280 | issue with evaluation of seq2seq_trainer.py on multiple gpus | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @rabeehk,\r\n\r\nWe sadly cannot fix issues of different repos, such as `rabeehk/seq2seq.git` - this is too time-consuming and not really our responsibility. We're happy to assist if you could provide a **short, precise, and complete** code snippet that is based on Transformers `Seq2SeqTrainer` only.",
"@rabeehk, I think you may have not considered that open source projects are not a help desk. If you are going to continue in the same fashion you will not get any answers at all.\r\n\r\nMany people ask for help but you need to think how to ask for help so that it's easy for the developers to quickly understand what is going on, reproduce the problem and solve what needs to be solved. But if you dump 1000 line logs and say help me to fix this without investigating it first yourself you will not get anywhere here.\r\n\r\nFor example, in your 1000 line log dump in OP if you look closely you will see that the error is on your side since it tells you:\r\n```\r\noutputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found\r\n```\r\ni.e. your setup is broken.\r\n\r\nSo you didn't really study the problem and yet want us to do this for you.\r\n\r\nThat's said I personally will not do it again, so please don't tag me unless it's related to what I'm working on and you found a bug in the code I wrote or maintain.\r\n\r\nAlso tagging multiple developers out of context is frowned upon - you tagged me on this issue:\r\n\r\n> Who can help\r\n> FSMT: @stas00\r\n\r\nwhat does it have to do with FSMT? \r\n\r\nThe tagging info is to help users to direct their questions to the right developers who are maintainers of certain domains. They can then decide at their own discretion to tag other developers if they feel it'd help move the issue forward. If you tag multiple people out of context you will gain no support. \r\n\r\nIf you are not willing to invest energy and time into investigating the problems you encounter and forming quality questions, please consider hiring someone who will be willing to answer the multitude of your questions and sort things out for you. Perhaps ask at the forums if someone is willing to work with you professionally where you pay them for the services provided.\r\n\r\nI hope this comment has been useful and trust you will find a way to receive the support you need.\r\n",
"Yes @stas00 is completely right. @rabeehk your comments are borderline spammy. \r\n\r\nWe try to help as much as possible but you also need to put in the work so the community can actually help you efficiently.",
"Hi Stephan, Hi Julien\nplease find my responses below:\n\nOn Wed, Dec 23, 2020 at 6:29 PM Stas Bekman <[email protected]>\nwrote:\n\n> rabeehk, I think you may have not considered that open source projects are\n> not a help desk. If you are going to continue in the same fashion you will\n> not get any answers at all.\n>\nSorry if this looks like a spam to you, but I really still think this was a\nbug, if you try to load the model twice inside the finetune_trainer.py in\nevaluation part, which is something the user might well need when one wants\nto apply more chnages to the trained model before evaluation, you would see\nthis is not multi-process safe resulting in the bug I reported.\n\n> Many people ask for help but you need to think how to ask for help so that\n> it's easy for the developers to quickly understand what is going on,\n> reproduce the problem and solve what needs to be solved. But if you dump\n> 1000 line logs and say help me to fix this without investigating it first\n> yourself you will not get anywhere here.\n>\nSorry I thought providing full logs help, if this is not sure, I would not\nprovide the full logs, I still included the one line error message above\nthese logs, I did investigate the issue myself, and I realized this is not\nmulti-process safe as I mentioned.\n\n> For example, in your 1000 line log dump in OP if you look closely you will\n> see that the error is on your side since it tells you:\n>\n> outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found\n>\n> no the setup is not broken, the files are there as I said, please read my\nbug report carefully, this is the result of the bug I said.\n\n> i.e. your setup is broken.\n>\n> So you didn't really study the problem and yet want us to do this for you.\n>\n> not really, I investigated it for hours and hours in fact.\n\n> That's said I personally will not do it again, so please don't tag me\n> unless it's related to what I'm working on and you found a bug in the code\n> I wrote or maintain.\n>\n> Sure, I was thinking you are working on seq2seq from various updates on\nthis, sorry for the mistake,\n\n> Also tagging multiple developers out of context is frowned upon - you\n> tagged me on this issue:\n>\n> Who can help\n> FSMT: @stas00 <https://github.com/stas00>\n>\n> what does it have to do with FSMT?\n>\nsorry for the mistake, I explained this above, I was really thinking you\nare working on seq2seq and though this is relevant.\n\n> The tagging info is to help users to direct their questions to the right\n> developers who are maintainers of certain domains. They can then decide to\n> tag other developers if they feel it'd help the issue along. If you tag\n> multiple people out of context you will gain no support.\n>\n> I was really mistaken thinking this is relevant.\n\n> If you are not willing to invest energy and time into investigating the\n> problems you encounter and forming quality questions, please consider\n> hiring someone who will be willing to answer the multitude of your\n> questions and sort things out for you. 
Perhaps ask at the forums if someone\n> is willing to work with you professionally where you pay them for the\n> services provided.\n>\nNo, this is not correct, I did spent hours and hours on this, to me this is\nstill a bug.\n\n> I hope this comment has been useful and trust you will find a way to\n> receive the support you need.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/9280#issuecomment-750399462>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCBMPNG4IN2NE5DGRPDSWISH7ANCNFSM4VG3SIFA>\n> .\n>\n",
"I also should say putting comments like your last paragraph @stas00 is inappropripate. No matter I beleive this is a bug you think this is a spam, no matter in which position you are, no matter I mistakenly thought providing full logs helps, please behave people with respect. I still believe this is a bug.\r\n\r\nOn Wed, Dec 23, 2020, 8:43 PM Rabeeh Karimi <[email protected]> wrote:\r\n\r\n> Hi Stephan, Hi Julien\r\n> please find my responses below:\r\n>\r\n> On Wed, Dec 23, 2020 at 6:29 PM Stas Bekman <[email protected]>\r\n> wrote:\r\n>\r\n>> rabeehk, I think you may have not considered that open source projects\r\n>> are not a help desk. If you are going to continue in the same fashion you\r\n>> will not get any answers at all.\r\n>>\r\n> Sorry if this looks like a spam to you, but I really still think this was\r\n> a bug, if you try to load the model twice inside the finetune_trainer.py in\r\n> evaluation part, which is something the user might well need when one wants\r\n> to apply more chnages to the trained model before evaluation, you would see\r\n> this is not multi-process safe resulting in the bug I reported.\r\n>\r\n>> Many people ask for help but you need to think how to ask for help so\r\n>> that it's easy for the developers to quickly understand what is going on,\r\n>> reproduce the problem and solve what needs to be solved. But if you dump\r\n>> 1000 line logs and say help me to fix this without investigating it first\r\n>> yourself you will not get anywhere here.\r\n>>\r\n> Sorry I thought providing full logs help, if this is not sure, I would not\r\n> provide the full logs, I still included the one line error message above\r\n> these logs, I did investigate the issue myself, and I realized this is not\r\n> multi-process safe as I mentioned.\r\n>\r\n>> For example, in your 1000 line log dump in OP if you look closely you\r\n>> will see that the error is on your side since it tells you:\r\n>>\r\n>> outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found\r\n>>\r\n>> no the setup is not broken, the files are there as I said, please read my\r\n> bug report carefully, this is the result of the bug I said.\r\n>\r\n>> i.e. your setup is broken.\r\n>>\r\n>> So you didn't really study the problem and yet want us to do this for you.\r\n>>\r\n>> not really, I investigated it for hours and hours in fact.\r\n>\r\n>> That's said I personally will not do it again, so please don't tag me\r\n>> unless it's related to what I'm working on and you found a bug in the code\r\n>> I wrote or maintain.\r\n>>\r\n>> Sure, I was thinking you are working on seq2seq from various updates on\r\n> this, sorry for the mistake,\r\n>\r\n>> Also tagging multiple developers out of context is frowned upon - you\r\n>> tagged me on this issue:\r\n>>\r\n>> Who can help\r\n>> FSMT: @stas00 <https://github.com/stas00>\r\n>>\r\n>> what does it have to do with FSMT?\r\n>>\r\n> sorry for the mistake, I explained this above, I was really thinking you\r\n> are working on seq2seq and though this is relevant.\r\n>\r\n>> The tagging info is to help users to direct their questions to the right\r\n>> developers who are maintainers of certain domains. They can then decide to\r\n>> tag other developers if they feel it'd help the issue along. 
If you tag\r\n>> multiple people out of context you will gain no support.\r\n>>\r\n>> I was really mistaken thinking this is relevant.\r\n>\r\n>> If you are not willing to invest energy and time into investigating the\r\n>> problems you encounter and forming quality questions, please consider\r\n>> hiring someone who will be willing to answer the multitude of your\r\n>> questions and sort things out for you. Perhaps ask at the forums if someone\r\n>> is willing to work with you professionally where you pay them for the\r\n>> services provided.\r\n>>\r\n> No, this is not correct, I did spent hours and hours on this, to me this\r\n> is still a bug.\r\n>\r\n>> I hope this comment has been useful and trust you will find a way to\r\n>> receive the support you need.\r\n>>\r\n>> —\r\n>> You are receiving this because you were mentioned.\r\n>> Reply to this email directly, view it on GitHub\r\n>> <https://github.com/huggingface/transformers/issues/9280#issuecomment-750399462>,\r\n>> or unsubscribe\r\n>> <https://github.com/notifications/unsubscribe-auth/ABP4ZCBMPNG4IN2NE5DGRPDSWISH7ANCNFSM4VG3SIFA>\r\n>> .\r\n>>\r\n>\r\n",
"The point we are trying to communicate is that you need to review how you communicate, @rabeehk. Your communications come across as too much and too indiscriminate.\r\n\r\nI'm totally accepting that you might be unaware of what is expected in good communications and perhaps HuggingFace needs to have a guidelines document at how users can ask for help in the most efficient way for all involved parties.\r\n\r\nAs of this moment I'd happy to invest a bit of my free time to support you to find a way for you to become an asset to this community and not an annoyance. If you are willing to listen and take action:\r\n\r\n1. Anybody looking at your first post will have an urge to flee - it's scary in its length and most people will not even try to understand what could be a very valid issue.\r\n\r\n So you need to edit the first post to remove any information that's not pertaining to the issue at hand. e.g. all those Download xx% logs are totally useless. You're saying you are attaching the traceback, but you're attaching the full log. I accept that you might have not known that.\r\n\r\n Attaching a full log can be helpful if it's done as an attachment, a link to a paste.bin or at the very least if you enclosed it in:\r\n\r\n```\r\n<details>\r\n<summary>Full log</summary>\r\n<pre>\r\n\r\nmany\r\nlines \r\ngo\r\nhere\r\n\r\n</pre>\r\n</details>\r\n```\r\n Here is an example of the outcome:\r\n\r\n<details>\r\n<summary>Full log</summary>\r\n<pre>\r\nmany\r\nlines \r\ngo\r\nhere\r\n</pre>\r\n</details>\r\n\r\n2. As @patrickvonplaten replied to you, you can't ask someone to go into your repository and figure out what you may have done. The code is already very complex and unless there is an easy way to do a diff and it's a small diff, nobody has the time to investigate. So you need to spend time to find a way to reproduce the problem in a minimal example which should introduce no more than a few lines of code change-wise (of course, there are exceptions, but this is more of norm).\r\n\r\n Usually the best way is to just show the relevant backtrace (in DDP just one of them, as each process will dump a copy), the command line and then ask if there is anything else that you could supply to help the developer reproduce the problem.\r\n\r\n3. Try to use the latest official version. We have no resources to go and debug older revisions, which could easily have bugs that have been fixed in the latest released version.\r\n\r\n I understand that this is not always possible. But this is the best way if it fits.\r\n\r\n4. Most of the time you can't ask to test with your data, since we don't have your data. So either you should use some existing dataset supported by HF datasets or you need to have the code that generates a small sample on the fly.\r\n\r\n5. Do not tag multiple people on the issue unless you know this is expected, either because you asked them and they gave you an explicit permission or the Issue template instructs you to do so. Having someone help you like I'm doing now is not an invitation to tag that person in the future on all your issues.\r\n\r\n I can see why you chose to tag me by looking at seq2seq commits, and while I made a few small changes in seq2seq recently, it just happened to be so because I was working on something totally unrelated and there were some changes that were required for me to proceed. But I'm not in charge of that domain.\r\n\r\n Remember that every time you tag someone, they get a notification and you're taking their time w/o their permission. 
Please be sensitive to that.\r\n\r\n6. Use the edit button. Delete and merge multiple comments into one if nobody followed up yet. As you merge them edit them to be coherent. Use bullets and items if it makes sense.\r\n\r\n I know my first comment version almost always comes out with typos and can be incoherent or too verbose. If you look at my comments' history I often make a ton of re-edits, since I want to make sure my communication is as clear as possible (and I know I myself can be too verbose, mea culpa). \r\n \r\n---------------\r\n\r\nThe key message of this comment is that when you ask for support you are given a tiny sliver of developer's time and you need to quickly communicate the essentials of the issue at hand. You're not expected to be born with that knowledge. You're not expected to be perfect at it. If I may recommend - learn from issues posted by other people - see which issues get responses and which are ignored - learn what the posters who did get responses did right. It's a simple pattern matching with some patience.\r\n\r\nThere is no harm in asking: \"look, I have all these questions and I don't know how to ask them in the best possible way. Can someone help?\"\r\n\r\nPerhaps, you need to find a mentor in the community at the forums, by asking if someone can support you to help you find a way to file good issues.\r\n\r\nWhen you tune up your communications then the developers of this fabulous project will be more than happy to address and resolve the issues you raise. This skill, of course, will help you at any other open source project.\r\n\r\nPlease let me know if you found this helpful. And perhaps if you'd like to continue this discussion because you need further clarifications let's go to the forums https://discuss.huggingface.co/ and leave the Issues section alone for now. You have my permission to tag me (just @stas) in the forums for this particular discussion if you think it'd be helpful to you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): -
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
Trainer: @sgugger
## Information and commands to reproduce
I am running the seq2seq trainer model on multiple GPUs. The problem arises during evaluation. Here is my modified seq2seq_trainer.py code and how to reproduce the error:
```
git clone git@github.com:rabeehk/seq2seq.git
python setup.py develop
cd seq2seq
python -m torch.distributed.launch --nproc_per_node=4 --master_port=9918 finetune_t5_trainer.py temp_configs/mp-lr-3e-02-r-8-l-true.json
```
here is the error I get during the evaluation:
`file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found
`
## Bug description
I figured out that in finetune_trainer.py, if a user loads the config and the trained model again during evaluation, the processes cannot find the config file, showing this is not multi-GPU safe. I realized that if I guard the reload with `trainer.is_world_process_zero()` before loading the model and config in the evaluation part, the issue goes away. Could you please comment on this and assist with it? I think it is a bug if the user cannot properly reload the model for evaluation on multiple GPUs. Thanks.
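A minimal sketch of the workaround I describe above (the class names and the path are illustrative assumptions; the actual script reloads its own T5 model and config here, and `trainer`/`training_args` are assumed to be in scope as in finetune_trainer.py):
```python
# Hedged sketch of the guard described in this report; names and paths are assumptions.
from transformers import AutoConfig, AutoModelForSeq2SeqLM

if trainer.is_world_process_zero():
    # only the main process re-loads the saved checkpoint before evaluation
    config = AutoConfig.from_pretrained(training_args.output_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(training_args.output_dir, config=config)
```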
## Full error stack.
```
I 1222 15:38:32.129721 2784 main shadow.py:122] > Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 388, in get_config_dict
file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found
I 1222 15:38:32.129757 2784 main shadow.py:122] > Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 388, in get_config_dict
Traceback (most recent call last):
I 1222 15:38:32.129800 2784 main shadow.py:122] >
I 1222 15:38:32.130028 2784 main shadow.py:122] > local_files_only=local_files_only,
I 1222 15:38:32.130069 2784 main shadow.py:122] > File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 962, in cached_path
I 1222 15:38:32.130314 2784 main shadow.py:122] > raise EnvironmentError("file {} not found".format(url_or_filename))
I 1222 15:38:32.130367 2784 main shadow.py:122] > OSError: file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found
I 1222 15:38:32.130448 2784 main shadow.py:122] > During handling of the above exception, another exception occurred:
I 1222 15:38:32.130505 2784 main shadow.py:122] >
I 1222 15:38:32.130557 2784 main shadow.py:122] > OSError: file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found
I 1222 15:38:32.130720 2784 main shadow.py:122] > Traceback (most recent call last):
OSError: file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found
I 1222 15:38:32.130823 2784 main shadow.py:122] > File "./finetune_t5_trainer.py", line 328, in <module>
I 1222 15:38:32.130858 2784 main shadow.py:122] >
I 1222 15:38:32.130903 2784 main shadow.py:122] > During handling of the above exception, another exception occurred:
I 1222 15:38:32.130938 2784 main shadow.py:122] >
I 1222 15:38:32.130976 2784 main shadow.py:122] > Traceback (most recent call last):
File "./finetune_t5_trainer.py", line 207, in main
main()
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 347, in from_pretrained
cache_dir=model_args.cache_dir)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 347, in from_pretrained
100%|██████████| 2100/2100 [36:51<00:00, 1.05s/it]
I 1222 15:38:32.131012 2784 main shadow.py:122] > cache_dir=model_args.cache_dir)
I 1222 15:38:32.131047 2784 main shadow.py:122] > File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 347, in from_pretrained
I 1222 15:38:32.131081 2784 main shadow.py:122] > config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
I 1222 15:38:32.131301 2784 main shadow.py:122] > File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 400, in get_config_dict
I 1222 15:38:32.131337 2784 main shadow.py:122] > raise EnvironmentError(msg)
I 1222 15:38:32.131372 2784 main shadow.py:122] > OSError: Can't load config for 'outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true'. Make sure that:
I 1222 15:38:32.131432 2784 main shadow.py:122] > - 'outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true' is a correct model identifier listed on 'https://huggingface.co/models'
I 1222 15:38:32.131615 2784 main shadow.py:122] > raise EnvironmentError(msg)
I 1222 15:38:32.131670 2784 main shadow.py:122] > OSError: Can't load config for 'outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true'. Make sure that:
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9280/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9279 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9279/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9279/comments | https://api.github.com/repos/huggingface/transformers/issues/9279/events | https://github.com/huggingface/transformers/pull/9279 | 773,677,497 | MDExOlB1bGxSZXF1ZXN0NTQ0NzE5Njkx | 9,279 | [Refactor] Splitting pipelines.py into its own module. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think I fixed all of them. \r\n\r\nWhen moving everything around I felt more comfortable switching temporarily to absolute imports forgot to switch back.\r\n"
] | 1,608 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
Moves various pipelines into their own files.
`pipelines.py` was 3k+ lines of code, which feels a bit too much. To go along with the `models` split
into various files, splitting into a cleaner module with subfiles was proposed by @thomwolf
(Can't find the discussion).
There are at least three parts that need to be made explicit.
- The *glue code* that makes `pipeline` so powerful (loading the right pipeline for the right model and task, basically filling in all the holes based on the call signature). That's `__init__.py`.
- The main class `Pipeline`, which mutualises a lot of the boilerplate. That's `base.py`.
- All the specialized classes `NerPipeline`, `FeatureExtractionPipeline`, ... those are the other files.
All the tests remain strictly the same to ensure there is no breaking change.
The main issue with this PR is that it is now a bit harder to follow the various code paths.
Some is in the base, some is in a specialized file.
`TranslationPipeline`, `SummarizationPipeline` and `Text2TextPipeline` live in the same file, `text2text_generation.py`, as they seem to share quite a bit of code. Some cleanup and better code sharing may be imaginable in a follow-up PR (https://github.com/Narsil/transformers/pull/1); at the very least, modifications in one should probably be duplicated in the others, as they use the same underlying (Seq2Seq) models.
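For orientation, a rough sketch of the layout described above — only the file names come from this description, everything else should be treated as an assumption; the usage lines just illustrate that the public entry point stays the same:
```python
# transformers/pipelines/          <- replaces the single 3k+ line pipelines.py
#   __init__.py                    <- the `pipeline()` factory / glue code
#   base.py                        <- the shared `Pipeline` base class and boilerplate
#   text2text_generation.py        <- TranslationPipeline, SummarizationPipeline, Text2TextPipeline
#   ...                            <- one file per specialized pipeline (NER, feature extraction, ...)

# Public API is unchanged:
from transformers import pipeline

nlp = pipeline("sentiment-analysis")
print(nlp("This refactor keeps the existing tests green."))
```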
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@thomwolf
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9279/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9279/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9279",
"html_url": "https://github.com/huggingface/transformers/pull/9279",
"diff_url": "https://github.com/huggingface/transformers/pull/9279.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9279.patch",
"merged_at": 1609922031000
} |
https://api.github.com/repos/huggingface/transformers/issues/9278 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9278/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9278/comments | https://api.github.com/repos/huggingface/transformers/issues/9278/events | https://github.com/huggingface/transformers/pull/9278 | 773,665,199 | MDExOlB1bGxSZXF1ZXN0NTQ0NzA5NjYx | 9,278 | LED | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2669577093,
"node_id": "MDU6TGFiZWwyNjY5NTc3MDkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PR%20for%20Model%20Addition",
"name": "PR for Model Addition",
"color": "5319e7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@patrickvonplaten when you have time, can you fix the conflicts and apply the same updates merged in Longformer to LED. Thanks!"
] | 1,608 | 1,611 | 1,609 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds LongformerEncoderDecoder (LED) from @ibeltagy - see: https://github.com/allenai/longformer#longformer
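For orientation, a minimal usage sketch — the checkpoint name and the explicit `global_attention_mask` follow the Longformer/LED convention and should be treated as assumptions until the docs below are finalized:
```python
# Hedged sketch: summarize a long document with LED; names are assumptions, not final docs.
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("A very long article ...", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # give the first token global attention, as in Longformer

summary_ids = model.generate(inputs["input_ids"], global_attention_mask=global_attention_mask)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```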
Todo:
- [x] **Important**: position embeddings have to be cut to correctly convert original Bart-like checkpoints to LED. The reason is that Bart uses a position embedding hack because of which the embedding indices 0 and 1 are never used, resulting in an embedding matrix that has a length of 1026 instead of 1024, see: https://github.com/huggingface/transformers/blob/88ef8893cd649cc2b4adb9885aba88c750118cff/src/transformers/models/bart/modeling_bart.py#L131. All LED checkpoints are hence cut to remove this hack in LED:
```python
model = LEDForConditionalGeneration.from_pretrained("./led-base-16384")
model.model.encoder.embed_positions.weight = torch.nn.Parameter(model.model.encoder.embed_positions.weight[2:, :])
model.model.decoder.embed_positions.weight = torch.nn.Parameter(model.model.decoder.embed_positions.weight[2:, :])
model.save_pretrained("./led-base-16384")
```
- [x] Make Pytorch integration tests pass. See `LEDIntegrationTests` in `tests/test_modeling_led.py`.
- [x] Add gradient_checkpointing
- [x] Make common tests work
- [x] Add convenient padding function so that input can be of whatever size and add global_attn logic to mask
- [x] Automatically create attention_mask in encoder if not provided
- [x] Finish PT version
- [x] Make TF version work
- [x] Add tips in docs for LED
- [x] Eval notebook: https://colab.research.google.com/drive/12INTTR6n64TzS4RrXZxMSXfrOd9Xzamo?usp=sharing
- [x] Nice to have: Fine-tune notebook: https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing
- [ ] Make nice model cards
## TODO after PR is merged:
- [ ] Correctly add `# Copied from ....` statements from Bart and Longformer (this probably requires the Bart refactor to be merged before)
- [ ] Open issue regarding problems with TF save_model test
- [ ] Correct templates: delete unnecessary test for tf bart; add gradient checkpointing by default in PT
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9278/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9278/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9278",
"html_url": "https://github.com/huggingface/transformers/pull/9278",
"diff_url": "https://github.com/huggingface/transformers/pull/9278.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9278.patch",
"merged_at": 1609848871000
} |
https://api.github.com/repos/huggingface/transformers/issues/9277 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9277/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9277/comments | https://api.github.com/repos/huggingface/transformers/issues/9277/events | https://github.com/huggingface/transformers/pull/9277 | 773,651,223 | MDExOlB1bGxSZXF1ZXN0NTQ0Njk4NzU0 | 9,277 | [Seq2Seq Templates] Fix check_repo.py templates file | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9277/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9277",
"html_url": "https://github.com/huggingface/transformers/pull/9277",
"diff_url": "https://github.com/huggingface/transformers/pull/9277.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9277.patch",
"merged_at": 1608720021000
} |
https://api.github.com/repos/huggingface/transformers/issues/9276 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9276/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9276/comments | https://api.github.com/repos/huggingface/transformers/issues/9276/events | https://github.com/huggingface/transformers/issues/9276 | 773,648,073 | MDU6SXNzdWU3NzM2NDgwNzM= | 9,276 | Vision Transformer | {
"login": "czabo",
"id": 75574105,
"node_id": "MDQ6VXNlcjc1NTc0MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/75574105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czabo",
"html_url": "https://github.com/czabo",
"followers_url": "https://api.github.com/users/czabo/followers",
"following_url": "https://api.github.com/users/czabo/following{/other_user}",
"gists_url": "https://api.github.com/users/czabo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czabo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czabo/subscriptions",
"organizations_url": "https://api.github.com/users/czabo/orgs",
"repos_url": "https://api.github.com/users/czabo/repos",
"events_url": "https://api.github.com/users/czabo/events{/privacy}",
"received_events_url": "https://api.github.com/users/czabo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "czabo",
"id": 75574105,
"node_id": "MDQ6VXNlcjc1NTc0MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/75574105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czabo",
"html_url": "https://github.com/czabo",
"followers_url": "https://api.github.com/users/czabo/followers",
"following_url": "https://api.github.com/users/czabo/following{/other_user}",
"gists_url": "https://api.github.com/users/czabo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czabo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czabo/subscriptions",
"organizations_url": "https://api.github.com/users/czabo/orgs",
"repos_url": "https://api.github.com/users/czabo/repos",
"events_url": "https://api.github.com/users/czabo/events{/privacy}",
"received_events_url": "https://api.github.com/users/czabo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "czabo",
"id": 75574105,
"node_id": "MDQ6VXNlcjc1NTc0MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/75574105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czabo",
"html_url": "https://github.com/czabo",
"followers_url": "https://api.github.com/users/czabo/followers",
"following_url": "https://api.github.com/users/czabo/following{/other_user}",
"gists_url": "https://api.github.com/users/czabo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czabo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czabo/subscriptions",
"organizations_url": "https://api.github.com/users/czabo/orgs",
"repos_url": "https://api.github.com/users/czabo/repos",
"events_url": "https://api.github.com/users/czabo/events{/privacy}",
"received_events_url": "https://api.github.com/users/czabo/received_events",
"type": "User",
"site_admin": false
}
] | [
"This was implemented in https://github.com/huggingface/transformers/pull/10950"
] | 1,608 | 1,631 | 1,631 | NONE | null | # 🌟 New model addition
## Model description
This issue proposes adding the Vision Transformer model described in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929). If you have any feedback and/or further ideas for the implementation, please don't hesitate to mention them.
## Open source status
* [x] the model implementation is available:
[The official github repo](https://github.com/google-research/vision_transformer) provides the implementation in Jax/Flax.
* [x] the model weights are available:
See the github repo above.
* [x] who are the authors: (mention them, if possible by @gh-username)
Google Research, Brain Team
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9276/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9275 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9275/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9275/comments | https://api.github.com/repos/huggingface/transformers/issues/9275/events | https://github.com/huggingface/transformers/issues/9275 | 773,647,024 | MDU6SXNzdWU3NzM2NDcwMjQ= | 9,275 | Disable progress bar for Trainer | {
"login": "Nickil21",
"id": 8767964,
"node_id": "MDQ6VXNlcjg3Njc5NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8767964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nickil21",
"html_url": "https://github.com/Nickil21",
"followers_url": "https://api.github.com/users/Nickil21/followers",
"following_url": "https://api.github.com/users/Nickil21/following{/other_user}",
"gists_url": "https://api.github.com/users/Nickil21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nickil21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nickil21/subscriptions",
"organizations_url": "https://api.github.com/users/Nickil21/orgs",
"repos_url": "https://api.github.com/users/Nickil21/repos",
"events_url": "https://api.github.com/users/Nickil21/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nickil21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you set `disable_tqdm=False` in your `TrainingArguments`, you shouldn't have any progress bar from the library.",
"Well, I think you meant `disable_tqdm=True`. By the way, the following worked:\r\n\r\n`args = TrainingArguments(disable_tqdm=True, output_dir=\"tmp_trainer\")`\r\n\r\nI am still getting progress bars for the `dataset.map()` though. Is there something like `verbose=False`?",
"`dataset.map` comes from the Datasets library, not Transformers. So you should open an issue there for this part :-)"
] | 1,608 | 1,608 | 1,608 | NONE | null | I am referencing code similar to [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py).
I am running a preprocessing function that does the tokenization of text as well as `trainer.predict` on a pandas dataframe. How do I disable the progress bar from showing the progress made on each row of the dataframe? I thought it would work by disabling logging as well as `tqdm`, but it is not the case here. #3050 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9275/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9275/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9274 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9274/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9274/comments | https://api.github.com/repos/huggingface/transformers/issues/9274/events | https://github.com/huggingface/transformers/issues/9274 | 773,645,602 | MDU6SXNzdWU3NzM2NDU2MDI= | 9,274 | Loss printed by tensorflow fit() differs from loss using custom loop for RoBERTa | {
"login": "brand17",
"id": 36546021,
"node_id": "MDQ6VXNlcjM2NTQ2MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/36546021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brand17",
"html_url": "https://github.com/brand17",
"followers_url": "https://api.github.com/users/brand17/followers",
"following_url": "https://api.github.com/users/brand17/following{/other_user}",
"gists_url": "https://api.github.com/users/brand17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brand17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brand17/subscriptions",
"organizations_url": "https://api.github.com/users/brand17/orgs",
"repos_url": "https://api.github.com/users/brand17/repos",
"events_url": "https://api.github.com/users/brand17/events{/privacy}",
"received_events_url": "https://api.github.com/users/brand17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Windows 10
- Python version: 3.6
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@jplu, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): RoBERTa
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the following script
2. The printed out losses are different
```
import tensorflow as tf
from transformers import RobertaConfig, TFRobertaMainLayer
# 1. Create a class to be able to use fit()
class Transformer(tf.keras.Model):
def __init__(self):
super(Transformer, self).__init__()
config = RobertaConfig(
vocab_size=100,
hidden_size=128,
intermediate_size=128,
max_position_embeddings=514,
num_attention_heads=8,
num_hidden_layers=6,
type_vocab_size=1,
)
self.encoder = TFRobertaMainLayer(config)
def call(self, inp, training=False):
return self.encoder(inp)[0]
model = Transformer()
# 2. Calculating loss manually for dummy input
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
x = tf.constant([[1, 0]])
y_true = tf.constant([[1, 0]])
y_pred = model((x, x))
loss = loss_fn(y_true, y_pred)
print(loss) # printing 4.8093767
# 3. Run fit()
model.compile(loss=loss_fn)
model.fit((x, x), y_true) # printing 4.7854
```
## Expected behavior
The losses should be equal. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9274/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9273 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9273/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9273/comments | https://api.github.com/repos/huggingface/transformers/issues/9273/events | https://github.com/huggingface/transformers/pull/9273 | 773,632,876 | MDExOlB1bGxSZXF1ZXN0NTQ0Njg0ODE4 | 9,273 | Fix param error | {
"login": "xu-song",
"id": 13825126,
"node_id": "MDQ6VXNlcjEzODI1MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13825126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xu-song",
"html_url": "https://github.com/xu-song",
"followers_url": "https://api.github.com/users/xu-song/followers",
"following_url": "https://api.github.com/users/xu-song/following{/other_user}",
"gists_url": "https://api.github.com/users/xu-song/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xu-song/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xu-song/subscriptions",
"organizations_url": "https://api.github.com/users/xu-song/orgs",
"repos_url": "https://api.github.com/users/xu-song/repos",
"events_url": "https://api.github.com/users/xu-song/events{/privacy}",
"received_events_url": "https://api.github.com/users/xu-song/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null |
# What does this PR do?
Fixes the following error:
```
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
```
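The PR description does not say which example was affected; purely as an illustration of this class of error, a model whose `forward()` takes no `token_type_ids` (DistilBERT is used here only as an assumption) raises it when that argument is passed:
```python
# Illustration only — the model choice is an assumption, not taken from this PR.
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="pt")
model(
    input_ids=inputs["input_ids"],
    token_type_ids=torch.zeros_like(inputs["input_ids"]),  # DistilBertModel.forward() has no such argument
)
# -> TypeError: forward() got an unexpected keyword argument 'token_type_ids'
```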
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9273/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9273",
"html_url": "https://github.com/huggingface/transformers/pull/9273",
"diff_url": "https://github.com/huggingface/transformers/pull/9273.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9273.patch",
"merged_at": 1608719698000
} |
https://api.github.com/repos/huggingface/transformers/issues/9272 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9272/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9272/comments | https://api.github.com/repos/huggingface/transformers/issues/9272/events | https://github.com/huggingface/transformers/pull/9272 | 773,564,855 | MDExOlB1bGxSZXF1ZXN0NTQ0NjMzNTI4 | 9,272 | Fix gpt2 document | {
"login": "xu-song",
"id": 13825126,
"node_id": "MDQ6VXNlcjEzODI1MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13825126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xu-song",
"html_url": "https://github.com/xu-song",
"followers_url": "https://api.github.com/users/xu-song/followers",
"following_url": "https://api.github.com/users/xu-song/following{/other_user}",
"gists_url": "https://api.github.com/users/xu-song/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xu-song/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xu-song/subscriptions",
"organizations_url": "https://api.github.com/users/xu-song/orgs",
"repos_url": "https://api.github.com/users/xu-song/repos",
"events_url": "https://api.github.com/users/xu-song/events{/privacy}",
"received_events_url": "https://api.github.com/users/xu-song/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Fixes a GPT-2 documentation error:
```
AttributeError: 'GPT2DoubleHeadsModelOutput' object has no attribute 'lm_logits'
```
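For reference, a sketch of the corrected attribute access — the setup mirrors the usual `GPT2DoubleHeadsModel` example and is meant as an illustration, not a copy of the fixed docstring:
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# add a [CLS] token used by the multiple-choice head
tokenizer.add_special_tokens({"cls_token": "[CLS]"})
model.resize_token_embeddings(len(tokenizer))

choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
encoded = [tokenizer.encode(c) for c in choices]
cls_positions = [ids.index(tokenizer.cls_token_id) for ids in encoded]

input_ids = torch.tensor(encoded).unsqueeze(0)   # batch of 1 with 2 choices
mc_token_ids = torch.tensor([cls_positions])

outputs = model(input_ids, mc_token_ids=mc_token_ids, return_dict=True)
lm_logits = outputs.logits    # the output attribute is `logits`, not `lm_logits`
mc_logits = outputs.mc_logits
```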
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9272/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9272",
"html_url": "https://github.com/huggingface/transformers/pull/9272",
"diff_url": "https://github.com/huggingface/transformers/pull/9272.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9272.patch",
"merged_at": 1608719656000
} |
https://api.github.com/repos/huggingface/transformers/issues/9271 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9271/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9271/comments | https://api.github.com/repos/huggingface/transformers/issues/9271/events | https://github.com/huggingface/transformers/pull/9271 | 773,507,104 | MDExOlB1bGxSZXF1ZXN0NTQ0NTg2MTYy | 9,271 | allow integer device for BatchEncoding | {
"login": "jethrokuan",
"id": 1667473,
"node_id": "MDQ6VXNlcjE2Njc0NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1667473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jethrokuan",
"html_url": "https://github.com/jethrokuan",
"followers_url": "https://api.github.com/users/jethrokuan/followers",
"following_url": "https://api.github.com/users/jethrokuan/following{/other_user}",
"gists_url": "https://api.github.com/users/jethrokuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jethrokuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jethrokuan/subscriptions",
"organizations_url": "https://api.github.com/users/jethrokuan/orgs",
"repos_url": "https://api.github.com/users/jethrokuan/repos",
"events_url": "https://api.github.com/users/jethrokuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jethrokuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Fixes #9244
I'm not fully aware of the details behind the Apex guard, in the method, so maybe this is not the solution.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
tokenizers: @mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9271/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9271",
"html_url": "https://github.com/huggingface/transformers/pull/9271",
"diff_url": "https://github.com/huggingface/transformers/pull/9271.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9271.patch",
"merged_at": 1608796917000
} |
https://api.github.com/repos/huggingface/transformers/issues/9270 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9270/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9270/comments | https://api.github.com/repos/huggingface/transformers/issues/9270/events | https://github.com/huggingface/transformers/issues/9270 | 773,413,234 | MDU6SXNzdWU3NzM0MTMyMzQ= | 9,270 | how can I change the AlbertModel's vocab | {
"login": "aLowMagic",
"id": 26134992,
"node_id": "MDQ6VXNlcjI2MTM0OTky",
"avatar_url": "https://avatars.githubusercontent.com/u/26134992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aLowMagic",
"html_url": "https://github.com/aLowMagic",
"followers_url": "https://api.github.com/users/aLowMagic/followers",
"following_url": "https://api.github.com/users/aLowMagic/following{/other_user}",
"gists_url": "https://api.github.com/users/aLowMagic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aLowMagic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aLowMagic/subscriptions",
"organizations_url": "https://api.github.com/users/aLowMagic/orgs",
"repos_url": "https://api.github.com/users/aLowMagic/repos",
"events_url": "https://api.github.com/users/aLowMagic/events{/privacy}",
"received_events_url": "https://api.github.com/users/aLowMagic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This should help: https://github.com/huggingface/transformers/issues/1413#issuecomment-538083512",
"Tanks a lot."
] | 1,608 | 1,608 | 1,608 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
how can I change the AlbertModel's vocab
Thanks
## Motivation
I noticed that I can change BERT's vocab by changing vocab.txt. But when I use ALBERT's API, the documentation suggests:
`tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')`. So how can I change that?
Thanks
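For context, a hedged sketch of what I mean (the file name is a placeholder): ALBERT tokenizes with a SentencePiece model rather than a vocab.txt, so a custom vocabulary appears to mean training and supplying your own SentencePiece file.
```python
# Sketch — "my_spiece.model" is a placeholder for a SentencePiece model trained separately.
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer(vocab_file="my_spiece.model")
print(tokenizer.tokenize("hello world"))
```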
## Your contribution
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9270/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9269 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9269/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9269/comments | https://api.github.com/repos/huggingface/transformers/issues/9269/events | https://github.com/huggingface/transformers/issues/9269 | 773,325,153 | MDU6SXNzdWU3NzMzMjUxNTM= | 9,269 | Output probability from model.generate | {
"login": "tomdzh",
"id": 50083108,
"node_id": "MDQ6VXNlcjUwMDgzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/50083108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomdzh",
"html_url": "https://github.com/tomdzh",
"followers_url": "https://api.github.com/users/tomdzh/followers",
"following_url": "https://api.github.com/users/tomdzh/following{/other_user}",
"gists_url": "https://api.github.com/users/tomdzh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomdzh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomdzh/subscriptions",
"organizations_url": "https://api.github.com/users/tomdzh/orgs",
"repos_url": "https://api.github.com/users/tomdzh/repos",
"events_url": "https://api.github.com/users/tomdzh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomdzh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You'll have it soon 😉 , once #9150 is merged ",
"> You'll have it soon 😉 , once #9150 is merged\r\n\r\nThat's awesome. Thanks!",
"@patil-suraj great, it got merged but how does translates now to your question_generation repo? How do I get the output probability/confidence score to the the predicted answers?",
"I only found this comment from you https://discuss.huggingface.co/t/text-generation-pipeline-output-scores-parameter/3294/2: \r\n\r\n*the text-generation pipeline doesn’t return scores, however you could the generate method directly, to get the scores, this should help*\r\n\r\nWould be great if you could elaborate on that.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@patil-suraj Any chance that `generate_tf_utils` will get the same functionality? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,622 | 1,622 | NONE | null | # 🚀 Feature request
Do we have the option to output the probability of the generated sequence from model.generate function? It will be super useful for evaluating the confidence score of the generated sequence. Thanks so much! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9269/timeline | completed | null | null |
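The request in the record above was answered by PR #9150, which is referenced in the comments. A minimal sketch of how the generation scores can be retrieved once that change is available, assuming a `transformers` version that includes it; the `gpt2` checkpoint and the log-softmax post-processing are illustrative choices, not part of the issue:

```python
# Minimal sketch: retrieving per-step generation scores via generate()
# (assumes a transformers version that includes PR #9150).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The confidence of this continuation is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=20,
    return_dict_in_generate=True,  # return a ModelOutput instead of a plain tensor
    output_scores=True,            # keep the per-step scores
)

# outputs.scores is a tuple with one (batch_size, vocab_size) tensor per generated step;
# converting to log-probabilities gives the material for a rough sequence confidence.
step_log_probs = [torch.log_softmax(s, dim=-1) for s in outputs.scores]
print(len(step_log_probs), step_log_probs[0].shape)
```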
https://api.github.com/repos/huggingface/transformers/issues/9268 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9268/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9268/comments | https://api.github.com/repos/huggingface/transformers/issues/9268/events | https://github.com/huggingface/transformers/issues/9268 | 773,283,992 | MDU6SXNzdWU3NzMyODM5OTI= | 9,268 | Unable to load LayoutLM from pretrained | {
"login": "brian8128",
"id": 10691563,
"node_id": "MDQ6VXNlcjEwNjkxNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/10691563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brian8128",
"html_url": "https://github.com/brian8128",
"followers_url": "https://api.github.com/users/brian8128/followers",
"following_url": "https://api.github.com/users/brian8128/following{/other_user}",
"gists_url": "https://api.github.com/users/brian8128/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brian8128/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brian8128/subscriptions",
"organizations_url": "https://api.github.com/users/brian8128/orgs",
"repos_url": "https://api.github.com/users/brian8128/repos",
"events_url": "https://api.github.com/users/brian8128/events{/privacy}",
"received_events_url": "https://api.github.com/users/brian8128/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LayoutLM only has a PyTorch implementation available. If you remove the `from_tf=True` statement, it will work.\r\n",
"I am trying to load the model weights from [here](https://huggingface.co/microsoft/layoutlm-base-uncased/tree/main#) but `from_tf=False` doesn't work either. Traceback is below.\r\n```\r\nfile_share_pre_train_model_path = \"layoutlm-base-uncased\"\r\n... config = LayoutLMConfig.from_pretrained(\r\n... os.path.join(file_share_pre_train_model_path, \"config.json\"), num_labels=len(tag_labels), cache_dir=None\r\n... )\r\n... model = LayoutLMForTokenClassification.from_pretrained(\r\n... file_share_pre_train_model_path,\r\n... from_tf=False,\r\n... config=config,\r\n... cache_dir=None,\r\n... )\r\nTraceback (most recent call last):\r\n File \"/Users/hsk/Company/environments/ner_layoutlm/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 1035, in from_pretrained\r\n state_dict = torch.load(resolved_archive_file, map_location=\"cpu\")\r\n File \"/Users/hsk/Company/environments/ner_layoutlm/lib/python3.8/site-packages/torch/serialization.py\", line 527, in load\r\n with _open_zipfile_reader(f) as opened_zipfile:\r\n File \"/Users/hsk/Company/environments/ner_layoutlm/lib/python3.8/site-packages/torch/serialization.py\", line 224, in __init__\r\n super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))\r\nRuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at ../caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at ../caffe2/serialize/inline_container.cc:132)\r\nframe #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 135 (0x111e90787 in libc10.dylib)\r\nframe #1: caffe2::serialize::PyTorchStreamReader::init() + 2350 (0x119f5e14e in libtorch.dylib)\r\nframe #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 143 (0x119f5d79f in libtorch.dylib)\r\nframe #3: void pybind11::cpp_function::initialize<void pybind11::detail::initimpl::constructor<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::execute<pybind11::class_<caffe2::serialize::PyTorchStreamReader>, 0>(pybind11::class_<caffe2::serialize::PyTorchStreamReader>&)::'lambda'(pybind11::detail::value_and_holder&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >), void, pybind11::detail::value_and_holder&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<caffe2::serialize::PyTorchStreamReader>&&, (*)(0...), void pybind11::detail::initimpl::constructor<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::execute<pybind11::class_<caffe2::serialize::PyTorchStreamReader>, 0>(pybind11::class_<caffe2::serialize::PyTorchStreamReader>&)::'lambda'(pybind11::detail::value_and_holder&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) const&...)::'lambda'(pybind11::detail::function_call&)::operator()(pybind11::detail::function_call&) const + 147 (0x1113f57c3 in libtorch_python.dylib)\r\nframe #4: pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 3382 (0x110dede66 in libtorch_python.dylib)\r\nframe #5: 
cfunction_call_varargs + 120 (0x109b07518 in Python)\r\nframe #6: _PyObject_MakeTpCall + 373 (0x109b06f85 in Python)\r\nframe #7: method_vectorcall + 449 (0x109b0a0b1 in Python)\r\nframe #8: PyVectorcall_Call + 109 (0x109b072ad in Python)\r\nframe #9: slot_tp_init + 201 (0x109b5e619 in Python)\r\nframe #10: type_call + 297 (0x109b59a29 in Python)\r\nframe #11: _PyObject_MakeTpCall + 373 (0x109b06f85 in Python)\r\nframe #12: call_function + 533 (0x109bd5945 in Python)\r\nframe #13: _PyEval_EvalFrameDefault + 25678 (0x109bd274e in Python)\r\nframe #14: _PyEval_EvalCodeWithName + 2804 (0x109bd6734 in Python)\r\nframe #15: _PyFunction_Vectorcall + 270 (0x109b07a6e in Python)\r\nframe #16: _PyObject_FastCallDict + 247 (0x109b06dd7 in Python)\r\nframe #17: _PyObject_Call_Prepend + 143 (0x109b083df in Python)\r\nframe #18: slot_tp_init + 145 (0x109b5e5e1 in Python)\r\nframe #19: type_call + 297 (0x109b59a29 in Python)\r\nframe #20: _PyObject_MakeTpCall + 373 (0x109b06f85 in Python)\r\nframe #21: call_function + 533 (0x109bd5945 in Python)\r\nframe #22: _PyEval_EvalFrameDefault + 25829 (0x109bd27e5 in Python)\r\nframe #23: _PyEval_EvalCodeWithName + 2804 (0x109bd6734 in Python)\r\nframe #24: _PyFunction_Vectorcall + 270 (0x109b07a6e in Python)\r\nframe #25: call_function + 444 (0x109bd58ec in Python)\r\nframe #26: _PyEval_EvalFrameDefault + 25976 (0x109bd2878 in Python)\r\nframe #27: _PyEval_EvalCodeWithName + 2804 (0x109bd6734 in Python)\r\nframe #28: _PyFunction_Vectorcall + 270 (0x109b07a6e in Python)\r\nframe #29: method_vectorcall + 170 (0x109b09f9a in Python)\r\nframe #30: call_function + 444 (0x109bd58ec in Python)\r\nframe #31: _PyEval_EvalFrameDefault + 25976 (0x109bd2878 in Python)\r\nframe #32: _PyEval_EvalCodeWithName + 2804 (0x109bd6734 in Python)\r\nframe #33: PyEval_EvalCode + 100 (0x109bcc224 in Python)\r\nframe #34: builtin_exec + 626 (0x109bc9612 in Python)\r\nframe #35: cfunction_vectorcall_FASTCALL + 175 (0x109b438bf in Python)\r\nframe #36: call_function + 444 (0x109bd58ec in Python)\r\nframe #37: _PyEval_EvalFrameDefault + 25829 (0x109bd27e5 in Python)\r\nframe #38: function_code_fastcall + 128 (0x109b078d0 in Python)\r\nframe #39: call_function + 444 (0x109bd58ec in Python)\r\nframe #40: _PyEval_EvalFrameDefault + 25642 (0x109bd272a in Python)\r\nframe #41: _PyEval_EvalCodeWithName + 2804 (0x109bd6734 in Python)\r\nframe #42: _PyFunction_Vectorcall + 270 (0x109b07a6e in Python)\r\nframe #43: call_function + 444 (0x109bd58ec in Python)\r\nframe #44: _PyEval_EvalFrameDefault + 25642 (0x109bd272a in Python)\r\nframe #45: function_code_fastcall + 128 (0x109b078d0 in Python)\r\nframe #46: call_function + 444 (0x109bd58ec in Python)\r\nframe #47: _PyEval_EvalFrameDefault + 25642 (0x109bd272a in Python)\r\nframe #48: function_code_fastcall + 128 (0x109b078d0 in Python)\r\nframe #49: call_function + 444 (0x109bd58ec in Python)\r\nframe #50: _PyEval_EvalFrameDefault + 25642 (0x109bd272a in Python)\r\nframe #51: _PyEval_EvalCodeWithName + 2804 (0x109bd6734 in Python)\r\nframe #52: _PyFunction_Vectorcall + 270 (0x109b07a6e in Python)\r\nframe #53: call_function + 444 (0x109bd58ec in Python)\r\nframe #54: _PyEval_EvalFrameDefault + 25642 (0x109bd272a in Python)\r\nframe #55: function_code_fastcall + 128 (0x109b078d0 in Python)\r\nframe #56: call_function + 444 (0x109bd58ec in Python)\r\nframe #57: _PyEval_EvalFrameDefault + 25829 (0x109bd27e5 in Python)\r\nframe #58: function_code_fastcall + 128 (0x109b078d0 in Python)\r\nframe #59: call_function + 444 (0x109bd58ec in Python)\r\nframe 
#60: _PyEval_EvalFrameDefault + 25678 (0x109bd274e in Python)\r\nframe #61: _PyEval_EvalCodeWithName + 2804 (0x109bd6734 in Python)\r\nframe #62: PyEval_EvalCode + 100 (0x109bcc224 in Python)\r\nframe #63: PyRun_FileExFlags + 336 (0x109c1bed0 in Python)\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"<input>\", line 5, in <module>\r\n File \"/Users/hsk/environments/ner_layoutlm/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 1037, in from_pretrained\r\n raise OSError(\r\nOSError: Unable to load weights from pytorch checkpoint file for 'layoutlm-base-uncased' at 'layoutlm-base-uncased/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. \r\n\r\n```",
"Maybe try updating to Transformers 4.1.1 (I just ran the following in a notebook and it works):\r\n\r\n```\r\n!pip install transformers\r\n\r\nfrom transformers import LayoutLMForTokenClassification\r\nmodel = LayoutLMForTokenClassification.from_pretrained('microsoft/layoutlm-base-uncased')\r\n```",
"Thanks @NielsRogge It was a pytorch version issue. I solved it after seeing [this](https://github.com/huggingface/transformers/issues/7739#issuecomment-707214148)",
"Closing then, thanks for your help @NielsRogge !"
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 3.3.0
- Platform: Linux-4.15.0-76-generic-x86_64-with-glibc2.10
- Python version: 3.8.2
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
### Who can help
@sgugger
@LysandreJik
## Information
When I try to load a LayoutLM model with the following script I hit an error.
```
from transformers import LayoutLMForTokenClassification
model = LayoutLMForTokenClassification.from_pretrained('microsoft/layoutlm-base-uncased', from_tf=True)
```
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
~/miniconda3/envs/ML38/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
905 if resolved_archive_file is None:
--> 906 raise EnvironmentError
907 except EnvironmentError:
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-11-9f82aa1a161f> in <module>
----> 1 model = LayoutLMForTokenClassification.from_pretrained('microsoft/layoutlm-base-uncased', from_tf=True)
~/miniconda3/envs/ML38/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
911 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME}.\n\n"
912 )
--> 913 raise EnvironmentError(msg)
914
915 if resolved_archive_file == archive_file:
OSError: Can't load weights for 'microsoft/layoutlm-base-uncased'. Make sure that:
- 'microsoft/layoutlm-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'microsoft/layoutlm-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9268/timeline | completed | null | null |
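The thread above was resolved as a PyTorch version mismatch: the checkpoint was written with the newer zip-based serialization that older torch builds cannot read. A sketch of the usual workaround (not taken verbatim from this thread): either upgrade torch, or re-save the weights in the legacy format on a machine with a newer torch. The paths are illustrative.

```python
# Re-save a checkpoint in the legacy (non-zip) format so an older torch can load it.
# Run this with a recent torch; file paths are illustrative only.
import torch

state_dict = torch.load("layoutlm-base-uncased/pytorch_model.bin", map_location="cpu")
torch.save(
    state_dict,
    "layoutlm-base-uncased/pytorch_model_legacy.bin",
    _use_new_zipfile_serialization=False,  # write the old serialization format
)
```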
https://api.github.com/repos/huggingface/transformers/issues/9267 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9267/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9267/comments | https://api.github.com/repos/huggingface/transformers/issues/9267/events | https://github.com/huggingface/transformers/issues/9267 | 773,277,672 | MDU6SXNzdWU3NzMyNzc2NzI= | 9,267 | [hf args] shouldn't match partial arg names | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks like it's actually an intended behavior of `ArgumentParser`: see [here](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.parse_known_args) and in general of `argparse` (see [here](https://docs.python.org/3/library/argparse.html#prefix-matching)). So I don't think it categorizes as a bug, even if it should be documented.",
"Oh, fantastic - thank you for finding that out, @sgugger \r\n\r\nThis feature surely bit us yesterday."
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | For `--label_smoothing_factor` I can pass `--label_smoothing` and it still works - which is a bug, as it should do a full match and not a substring. This is with master.
context: finetune_trainer just switched from `--label_smoothing` to `--label_smoothing_factor` (different functionality) and we were puzzling over why `--label_smoothing` still worked.
this is definitely not urgent
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9267/timeline | completed | null | null |
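The prefix matching described in the record above is standard `argparse` behaviour. A small standalone sketch, not tied to `HfArgumentParser`, showing the abbreviation matching and the `allow_abbrev=False` switch that disables it:

```python
# Demonstrates argparse prefix matching: --label_smoothing resolves to
# --label_smoothing_factor unless abbreviations are explicitly disabled.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--label_smoothing_factor", type=float, default=0.0)
print(parser.parse_args(["--label_smoothing", "0.1"]))  # Namespace(label_smoothing_factor=0.1)

strict = argparse.ArgumentParser(allow_abbrev=False)
strict.add_argument("--label_smoothing_factor", type=float, default=0.0)
strict.parse_args(["--label_smoothing", "0.1"])  # exits with "unrecognized arguments"
```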
https://api.github.com/repos/huggingface/transformers/issues/9266 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9266/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9266/comments | https://api.github.com/repos/huggingface/transformers/issues/9266/events | https://github.com/huggingface/transformers/pull/9266 | 773,268,347 | MDExOlB1bGxSZXF1ZXN0NTQ0Mzg2NDY2 | 9,266 | Minor documentation revisions from copyediting | {
"login": "connorbrinton",
"id": 1848731,
"node_id": "MDQ6VXNlcjE4NDg3MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1848731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connorbrinton",
"html_url": "https://github.com/connorbrinton",
"followers_url": "https://api.github.com/users/connorbrinton/followers",
"following_url": "https://api.github.com/users/connorbrinton/following{/other_user}",
"gists_url": "https://api.github.com/users/connorbrinton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connorbrinton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connorbrinton/subscriptions",
"organizations_url": "https://api.github.com/users/connorbrinton/orgs",
"repos_url": "https://api.github.com/users/connorbrinton/repos",
"events_url": "https://api.github.com/users/connorbrinton/events{/privacy}",
"received_events_url": "https://api.github.com/users/connorbrinton/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks like you need to run `make style` on your branch to fix the formatting of the doc files. Let me know if you run into any trouble doing that.",
"Thanks @sgugger 😄 I was able to run `make style` successfully (and update `preprocessing.rst` with the changes), but it looks like the `check_code_quality` check ran out of memory this time 😅 \r\n\r\n\r\n\r\nI tried rerunning it, but it seems like I don't have permission. Could you try rerunning it?\r\n\r\nI'm also happy to bump the `resource_class` for the check from `medium` to `medium+` if that would be helpful 🙂 ",
"Yes it was just a spurious failure. Thanks!"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Minor changes to the documentation to correct typos and improve readability. I noticed these typos while reading through the docs to familiarize myself with the library for a project, and thought it would be nice to make a PR for them 😊
I've already tested building the docs from these changes, and all changes seem to have taken effect properly 👍
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
## Who can review?
@sgugger would you mind reviewing this PR? I'm happy to make any changes (or remove any changes) you want 🙂 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9266/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9266",
"html_url": "https://github.com/huggingface/transformers/pull/9266",
"diff_url": "https://github.com/huggingface/transformers/pull/9266.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9266.patch",
"merged_at": 1608736550000
} |
https://api.github.com/repos/huggingface/transformers/issues/9265 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9265/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9265/comments | https://api.github.com/repos/huggingface/transformers/issues/9265/events | https://github.com/huggingface/transformers/issues/9265 | 773,236,931 | MDU6SXNzdWU3NzMyMzY5MzE= | 9,265 | [finetune_trainer] max length cl args redesign | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After giving this some thought, given the fact there are two different lengths here (input and targets) I would propose keeping `--max_source_length` and `--max_target_length` to avoid any confusion for the user. In the same vein, `run_qa.py` contains `max_seq_length` and `max_answer_length` to clearly differentiate the two.\r\n\r\nAs for val/test I don't have any strong opinion, apart from the fact they are not used properly in the prediction at the end (only for the preprocessing), so they should be collapsed IMO",
"This works for me!\r\n\r\nI especially would like to see all examples use same cl args for the same functionality.",
"Works for me too,\r\n\r\nAnd for val/test targets lengths, IMO we can collapse it into one single `max_generate_length` or `eval_max_length` since most of the users (and the example scripts as well) use the same value for both args",
"Having only `max_source_length` and `max_target_length` works for me!",
"regarding the `val_max_target_length` and `test_max_target_length` args\r\n\r\nThe reason we (I and Sam) decided to add that\r\n\r\n- in general, it’s okay to have a bit smaller max target length for training/validation because some documents could be too long than avg length, it’s okay during training if these get truncated\r\n- for the test, we should set the max target length to be as long as the longest text in the test set so it won’t get truncated. The reason is if the text in the test set is truncated then the calculated metrics won’t be accurate.\r\n\r\nAlso, we should mention in the readme that it's best to use `run_eval.py` for calculating metrics. As there is an issue when calculating BLUE score this way as outlined in #9161",
"> * for the test, we should set the max target length to be as long as the longest text in the test set so it won’t get truncated. The reason is if the text in the test set is truncated then the calculated metrics won’t be accurate.\r\n\r\nWhy not compute this in the script then? That would avoid having an argument that is half-used.",
"What I'm hearing is that perhaps there was a concrete situation where the training needed a shorter max length than eval/test. So @sgugger's suggestion will solve your concern that the scoring is done on the full length, but not if for some reason the training stage should use shorter sequences. \r\n\r\nSo we have 2 related situations and I'm not sure @sgugger's solution covers the 2nd one. I just don't know whether it's a real use case or a may be.\r\n\r\nPlease let me know if I haven't explained myself clearly.\r\n",
"Thinking more about it, @patil-suraj, won't fixing up `generate`'s `max_length` to match the longest max length of the test dataset lead to better scores than what they will be otherwise? And thus provide misleading results? \r\n\r\nFor example, let's take translation, the best model would do the most correct translation regardless of whether it is allowed to generate much longer sequences. So if we calculate any such max length dynamically to be fair I think there needs to be added some extra length beyond the longest test sequence. Say `max(len(tokenize(test_inputs)))*1.1`? Does it make sense?\r\n\r\nSo perhaps this is how it should work. \r\n- `max_target_length` is for training\r\n- for eval/train we derive max length from the input data plus some margin?",
"What I meant here is that the `test_max_target_length` is also passed to the dataset, and the dataset then truncates the reference targets (translations, summaries) longer than that. So later the generated targets are compared with (possibly) truncated references which will result in incorrect metrics ",
"You're absolutely correct - that doesn't sound right.\r\n\r\nSo before we can discuss the flags we then need to first discuss the algorithm, otherwise we won't get anywhere.\r\n\r\nIf in order to get the correct metrics we must not truncate the val/test datasets then why are we doing that in the current code?\r\n\r\nPerhaps what I suggested at the end of https://github.com/huggingface/transformers/issues/9265#issuecomment-750423347 is a better way to approach it?\r\n\r\nAlso please don't forget that max_length is used to deal with OOM limitations",
"This has been resolved."
] | 1,608 | 1,616 | 1,616 | CONTRIBUTOR | null | Splitting of from https://github.com/huggingface/transformers/pull/9241, it's been proposed to refactor the following 4 cl args of `finetune_trainer.py`:
1. `--max_source_length`
2. `--max_target_length`
3. `--val_max_target_length`
4. `--test_max_target_length`
https://github.com/huggingface/transformers/blob/f38c4ad302dd34faef1137b13e9636f4408b0462/examples/seq2seq/finetune_trainer.py#L82-L110
There are multiple comments wrt this in https://github.com/huggingface/transformers/pull/9241, especially towards the end of it
Let's redesign it here and then do a single breaking change to this probably in the new year.
To summarize, the main suggestions so far were:
1. to perhaps remove `--max_source_length` - but we need use cases to see whether this is safe to do
2. collapse cl args 2-4 into a single `max_length` arg to match `generate`'s API.
@sgugger, @patrickvonplaten, @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9265/timeline | completed | null | null |
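One of the ideas discussed in the comments above is deriving the eval/test target length from the data itself rather than from a fixed flag. A rough sketch of that idea; the checkpoint, file layout and the 10% margin are illustrative assumptions, not the agreed design:

```python
# Rough sketch: derive max_target_length for eval/test from the longest tokenized
# reference, plus a small margin so references are never truncated.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sshleifer/distill-mbart-en-ro-12-4")  # illustrative

with open("wmt_en_ro/test.target", encoding="utf-8") as f:  # illustrative path
    target_lines = [line.strip() for line in f]

longest = max(len(tokenizer(line).input_ids) for line in target_lines)
max_target_length = int(longest * 1.1)  # margin is an arbitrary illustrative choice
print(max_target_length)
```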
https://api.github.com/repos/huggingface/transformers/issues/9264 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9264/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9264/comments | https://api.github.com/repos/huggingface/transformers/issues/9264/events | https://github.com/huggingface/transformers/issues/9264 | 773,195,058 | MDU6SXNzdWU3NzMxOTUwNTg= | 9,264 | compute_metrics in the trainer does not seem to be extensible | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If I understand correctly you raise 2 unrelated issues:\r\n\r\n1. there is only one place where `compute_metrics` is set and perhaps it needs to be changed through the life of the trainer object?\r\n\r\nSince you can always override it with:\r\n```\r\ntrainer.compute_metrics = new_compute_metrics\r\n```\r\nwhen you need to switch it to another version, so in a pinch you can do that. But clearly this is not a public API at the moment and can change at any time.\r\n\r\nPerhaps all is needed is a setable accessor for the `compute_metrics` attribute, so that a user can use it to swap in a new function at will, rather than adding new arguments?\r\n\r\n2. You're saying that users may need to pass more args to `compute_metrics`, but it's not possible.\r\n\r\nYou can do that via a closure mechanism, e.g. how it's done here:\r\nhttps://github.com/huggingface/transformers/blob/cbe63949d76efd153a1f389f38fe9ce1287e06b0/examples/seq2seq/utils.py#L80\r\nso you build your `compute_metrics` on the fly, getting whatever data you need into the closure function and then it'll have access to whatever other data you may need at run time.\r\n\r\nHere is a silly example:\r\n```\r\ndef make_compute_metrics():\r\n extra_input = 1\r\n def compute_metrics(pred):\r\n print(f\"Look ma, I can pass my own args: {extra_input}\")\r\n return compute_metrics\r\n\r\ntrainer.compute_metrics = make_compute_metrics()\r\n\r\n# and then some time later in `prediction_loop`:\r\nself.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n# calls your created on the fly function with whatever other data you want to be seen from it.\r\n```\r\n\r\nSo your custom `compute_metrics_fn` now can access whatever other data you want besides the `EvalPrediction` object.\r\n\r\n\r\n\r\n",
"Hi there\nthank you for the response, yes, I agree this is possible, I solved this\nwith `functools.partial`, but still I think the better design would be to\nallow the user add extra parameters. so this was more feature request.\nPlease feel free to ignore if this does not make sense.\nthanks\nBest\nRabeeh\n\n\nOn Tue, Dec 22, 2020 at 11:12 PM Stas Bekman <[email protected]>\nwrote:\n\n> If I understand correctly you raise 2 unrelated issues:\n>\n> 1. there is only one place where compute_metrics is set and perhaps it\n> needs to be changed through the life of the trainer object?\n>\n> Since you can always override it with:\n>\n> trainer.compute_metrics = new_compute_metrics\n>\n> when you need to switch it to another version, so in a pinch you can do\n> that.\n>\n> Perhaps all is needed is an setable accessor for the compute_metrics\n> attribute, so that a user can use to swap in a new function at will, rather\n> than adding new arguments?\n>\n> 1. You're saying that users may need to pass more args to\n> compute_metrics, you can do that via closure, e.g. how it's done here:\n>\n> https://github.com/huggingface/transformers/blob/cbe63949d76efd153a1f389f38fe9ce1287e06b0/examples/seq2seq/utils.py#L80\n> so you build your compute_metrics on the fly, getting whatever data\n> you need into the closure function and then it'll have access to whatever\n> other data you may need at run time.\n>\n> Here is a silly example:\n>\n> def make_compute_metrics():\n> extra_input = 1\n> def compute_metrics(pred):\n> print(f\"Look ma, I can pass my own args: {extra_input}\")\n> return compute_metrics\n>\n> compute_metrics_fn = make_compute_metrics()\n> compute_metrics_fn()\n>\n> So your custom compute_metrics_fn now can access whatever other data you\n> want besides the EvalPrediction object.\n>\n> As I have shown in (1) you can now assign this to trainer.compute_metrics.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/9264#issuecomment-749827774>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHH2C7MLFOY6V3MWST5LSWERUDANCNFSM4VGBUKSQ>\n> .\n>\n",
"Yes, `partial` would do the trick.\r\n\r\nI've just shared my take on it. and that there might be a need for a public API to override `trainer.compute_metrics` post-`__init__`, \r\n\r\nIn my limited experience `partial` or a manual closure is how some projects implement such functions.\r\n\r\nI will let others comment though on this feature request.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | NONE | null | Hi,
This is more of a feature request. Looking into the `compute_metrics` function defined below:
https://github.com/huggingface/transformers/blob/c89bdfbe720bc8f41c7dc6db5473a2cb0955f224/src/transformers/trainer.py#L204
to me it looks like the design does not allow the user easy modification for different applications, or I am missing something; please find the explanation below:
Let's assume the user has multiple tasks, as in T5, and each task needs multiple different evaluation metrics that have to be generated on the fly. Since this function does not accept any argument other than `EvalPrediction`, it does not allow the user to pass further parameters to build the final evaluation metric on the fly (a `functools.partial` sketch follows this record). I would appreciate modifying the design to allow easier extension.
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9264/timeline | completed | null | null |
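The `functools.partial` route mentioned in the discussion above can look roughly like this; the extra `task_name` parameter and the accuracy computation are made up for illustration and are not taken from the thread:

```python
# Rough sketch: pass extra arguments into compute_metrics via functools.partial.
# task_name and the metric body are illustrative only.
from functools import partial

import numpy as np


def compute_metrics_with_task(p, task_name):
    preds = np.argmax(p.predictions, axis=-1)
    accuracy = (preds == p.label_ids).astype(np.float32).mean().item()
    return {f"{task_name}_accuracy": accuracy}


# Hypothetical usage with the Trainer:
# trainer = Trainer(..., compute_metrics=partial(compute_metrics_with_task, task_name="mnli"))
```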
https://api.github.com/repos/huggingface/transformers/issues/9263 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9263/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9263/comments | https://api.github.com/repos/huggingface/transformers/issues/9263/events | https://github.com/huggingface/transformers/pull/9263 | 773,190,366 | MDExOlB1bGxSZXF1ZXN0NTQ0MzE5ODQ5 | 9,263 | Adds MuRIL - BERT based model for 17 Indian Languages to the library | {
"login": "ravi03071991",
"id": 12198101,
"node_id": "MDQ6VXNlcjEyMTk4MTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/12198101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ravi03071991",
"html_url": "https://github.com/ravi03071991",
"followers_url": "https://api.github.com/users/ravi03071991/followers",
"following_url": "https://api.github.com/users/ravi03071991/following{/other_user}",
"gists_url": "https://api.github.com/users/ravi03071991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ravi03071991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ravi03071991/subscriptions",
"organizations_url": "https://api.github.com/users/ravi03071991/orgs",
"repos_url": "https://api.github.com/users/ravi03071991/repos",
"events_url": "https://api.github.com/users/ravi03071991/events{/privacy}",
"received_events_url": "https://api.github.com/users/ravi03071991/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @ravi03071991, \r\n\r\nthanks a lot for the new model! There are quite some empty files in the PR - can we maybe delete those?",
"From https://tfhub.dev/google/MuRIL/1 it seems that MuRIL is the same as BERT - do we need a new model class? It would be awesome if you could specify the differences between MuRIL and BERT in this PR :-) ",
"> Hey @ravi03071991,\r\n> \r\n> thanks a lot for the new model! There are quite some empty files in the PR - can we maybe delete those?\r\n\r\nSure. We can delete them.",
"- I've posted an adapted MuRIL BERT model here https://huggingface.co/monsoon-nlp/muril-adapted-local\r\n- Simran Khanuja has posted here https://huggingface.co/simran-kh/muril-cased-temp\r\n- there is also https://huggingface.co/google/muril-cased/tree/main but it has no model files\r\n\r\nDoes this do the job?",
"Hey @ravi03071991 and @mapmeld, \r\n\r\nSo what I understand is that the model can be used with **no** code addition using `BertModel` and `BertTokenizer` - is this correct? I think in this case it does make more sense to just add a model to the model hub as it's done with this checkpoint: https://huggingface.co/monsoon-nlp/muril-adapted-local/tree/main \r\n\r\nDid you guys check whether the model works as expected? We could do some quick fine-tuning evaluation on the XTREME benchmark to make sure the model behaves correctly in transformers. We should get more or less the same results as shown on the tf-hub: https://tfhub.dev/google/MuRIL/1 . I think it'll be pretty easy to do some fine-tuning / evaluation by slightly adapting this notebooks: https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb to use the XTREME dataset from `datasets`: https://huggingface.co/datasets/xtreme . Would someone be interested in giving it a shot at making such a notebook?\r\n\r\nI think with such a notebook, we can upload the pre-trained checkpoint to an \"official\" org name in the hub - probably `google/muril-bert-base` or something (google trained the model no?). Then we're happy to do some promotion on the model as well :-) ",
"> Hey @ravi03071991 and @mapmeld,\r\n> \r\n> So what I understand is that the model can be used with **no** code addition using `BertModel` and `BertTokenizer` - is this correct? I think in this case it does make more sense to just add a model to the model hub as it's done with this checkpoint: https://huggingface.co/monsoon-nlp/muril-adapted-local/tree/main\r\n> \r\n> Did you guys check whether the model works as expected? We could do some quick fine-tuning evaluation on the XTREME benchmark to make sure the model behaves correctly in transformers. We should get more or less the same results as shown on the tf-hub: https://tfhub.dev/google/MuRIL/1 . I think it'll be pretty easy to do some fine-tuning / evaluation by slightly adapting this notebooks: https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb to use the XTREME dataset from `datasets`: https://huggingface.co/datasets/xtreme . Would someone be interested in giving it a shot at making such a notebook?\r\n> \r\n> I think with such a notebook, we can upload the pre-trained checkpoint to an \"official\" org name in the hub - probably `google/muril-bert-base` or something (google trained the model no?). Then we're happy to do some promotion on the model as well :-)\r\n\r\n Sure. I am can take up the task of making the notebook.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a TensorFlow-based MuRIL model to the library. More details about the MuRIL model can be found [here](https://tfhub.dev/google/MuRIL/1).
Fixes: #9190
@LysandreJik @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9263/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9263",
"html_url": "https://github.com/huggingface/transformers/pull/9263",
"diff_url": "https://github.com/huggingface/transformers/pull/9263.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9263.patch",
"merged_at": null
} |
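As the maintainers note in the comments above, the checkpoint loads with the stock BERT classes, so no MuRIL-specific code is needed. A minimal sketch using one of the checkpoints linked in the thread (output shapes and checkpoint quality not verified here):

```python
# Minimal sketch: load MuRIL weights through the existing BERT/Auto classes,
# using a checkpoint mentioned in the comments above.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/muril-adapted-local")
model = AutoModel.from_pretrained("monsoon-nlp/muril-adapted-local")

inputs = tokenizer("नमस्ते दुनिया", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)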
https://api.github.com/repos/huggingface/transformers/issues/9262 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9262/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9262/comments | https://api.github.com/repos/huggingface/transformers/issues/9262/events | https://github.com/huggingface/transformers/pull/9262 | 773,181,711 | MDExOlB1bGxSZXF1ZXN0NTQ0MzEyNzY2 | 9,262 | Revert renaming in finetune_trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
As per the discussion in #9241, reverting all renaming in the `finetune_trainer.py` script for now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9262/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9262",
"html_url": "https://github.com/huggingface/transformers/pull/9262",
"diff_url": "https://github.com/huggingface/transformers/pull/9262.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9262.patch",
"merged_at": 1608669754000
} |
https://api.github.com/repos/huggingface/transformers/issues/9261 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9261/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9261/comments | https://api.github.com/repos/huggingface/transformers/issues/9261/events | https://github.com/huggingface/transformers/issues/9261 | 773,159,927 | MDU6SXNzdWU3NzMxNTk5Mjc= | 9,261 | [seq2seq] memory regression | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, we really should take a stab at better speed and memory regression testing. Big new years resolution!",
"This specific commit introduced the regression:\r\nhttps://github.com/huggingface/transformers/pull/9241/commits/fe7960bcbe0183d198661e1c05d82ed7ff118e18\r\n",
"There is a second problem:\r\n\r\nSame as above but with apex:\r\n```\r\n--label_smoothing 0.1 --fp16 --fp16_backend apex\r\n```\r\nhangs 5% into training - spinning CPU (not OOMing) - had to kill.\r\n\r\nchecked pre this PR - no hanging.\r\n\r\nFull command:\r\n```\r\nexport BS=12; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp --label_smoothing 0.1 --fp16 --fp16_backend apex\r\n```\r\n\r\n(It OOMs some time later into training) but no hanging.",
"So both problem seem to be related to label-smoothing, @sgugger has been testing hypotheses and this one worked:\r\n```\r\n# trainer.py (top)\r\ndef label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=-100):\r\n \"\"\"From fairseq\"\"\"\r\n if target.dim() == lprobs.dim() - 1:\r\n target = target.unsqueeze(-1)\r\n nll_loss = -lprobs.gather(dim=-1, index=target)\r\n smooth_loss = -lprobs.sum(dim=-1, keepdim=True)\r\n if ignore_index is not None:\r\n pad_mask = target.eq(ignore_index)\r\n nll_loss.masked_fill_(pad_mask, 0.0)\r\n smooth_loss.masked_fill_(pad_mask, 0.0)\r\n else:\r\n nll_loss = nll_loss.squeeze(-1)\r\n smooth_loss = smooth_loss.squeeze(-1)\r\n nll_loss = nll_loss.sum() # mean()? Scared to break other math.\r\n smooth_loss = smooth_loss.sum()\r\n eps_i = epsilon / lprobs.size(-1)\r\n loss = (1.0 - epsilon) * nll_loss + eps_i * smooth_loss\r\n return loss, nll_loss\r\n```\r\n\r\n```\r\n # trainer.py (in Trainer class)\r\n def compute_loss(self, model, inputs):\r\n labels = inputs.pop(\"labels\")\r\n logits = model(**inputs)[0]\r\n return label_smoothed_nll_loss(logits.view(-1, logits.shape[-1]), labels.view(-1), self.args.label_smoothing_factor)[0]\r\n```\r\n\r\n**edit** @sgugger says that this code wasn't right, so we currently don't have a solution yet. will keep on experimenting.",
"Hi.\r\nrelated to this bug, is my bug report here https://github.com/huggingface/transformers/issues/9311 \r\nIs there an alternative allowing me to move forward resolving memory issue for now? thanks",
"Well, I don't think it's related other than both using up more RAM ;) This regression happened in a very recent change, but you're using a much older transformers version. \r\n\r\nI will follow up in your Issue you linked to.\r\n",
"So `--fp16` seems to be related, if I remove it the regression goes away."
] | 1,608 | 1,611 | 1,611 | CONTRIBUTOR | null | #9241 introduced a memory regression - found out via git bisect.
I was able to train with BS=12 before this PR got merged, and now only BS=8, with:
```
export BS=12; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp --fp16
```
We really need to go back to that issue of memory benchmarks in CI and figure out how to make it happen.
The problem is that I started working on it some months back but gave up since each gpu gave different numbers...
For details please see: https://github.com/huggingface/transformers/issues/6045
edit: should also make sure that `--label_smoothing 0.1 --fp16 --fp16_backend apex` works https://github.com/huggingface/transformers/issues/9261#issuecomment-749800880
@patrickvonplaten, should we figure this out in the new year? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9261/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9260 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9260/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9260/comments | https://api.github.com/repos/huggingface/transformers/issues/9260/events | https://github.com/huggingface/transformers/pull/9260 | 773,097,487 | MDExOlB1bGxSZXF1ZXN0NTQ0MjQ0NTc1 | 9,260 | Add speed metrics to all example scripts + template | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The eval metrics are already reported with the other metrics, so need to add anything for them. Not sure about the refactor since this shouldn't really be a function in transformers (nothing to do with transformers models) so we would have to define it in every one of those scripts, which kind of takes the same length.",
"re: eval/train - except n_objs is missing from metrics - remember we had to add it separately in `finetune_trainer` when you refactored it?\r\n\r\nre: refactor: I haven't suggested anything for the core - we have utils.py for that.",
"Yes, but this is something I just did in `finetune_trainer` to have it output the same things as before, for other scripts I don't want that reported several times (it's already logged at the beginning of training/evaluation)."
] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
This does the same as #9198 but on all example scripts and the example template. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9260/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9260",
"html_url": "https://github.com/huggingface/transformers/pull/9260",
"diff_url": "https://github.com/huggingface/transformers/pull/9260.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9260.patch",
"merged_at": 1608663746000
} |
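A minimal sketch of the kind of speed metrics this PR is about (wall-clock runtime and samples per second per split); the helper name and metric keys here are illustrative rather than the exact ones used in the PR:

```python
# Illustrative speed-metrics helper: runtime and throughput for a train/eval split.
import time


def speed_metrics(split, start_time, num_samples):
    runtime = time.time() - start_time
    return {
        f"{split}_runtime": round(runtime, 4),
        f"{split}_samples_per_second": round(num_samples / runtime, 3),
    }


start = time.time()
time.sleep(0.1)  # stand-in for a training or evaluation loop
print(speed_metrics("train", start, num_samples=500))
```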
https://api.github.com/repos/huggingface/transformers/issues/9259 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9259/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9259/comments | https://api.github.com/repos/huggingface/transformers/issues/9259/events | https://github.com/huggingface/transformers/pull/9259 | 773,045,330 | MDExOlB1bGxSZXF1ZXN0NTQ0MjA0NDIw | 9,259 | Fix script that check objects are documented | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
Currently, the script that checks that objects in the main init are documented is not really running because I forgot a pair of `()`...
This PR fixes that and adds the objects introduced without documentation in their proper place. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9259/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9259",
"html_url": "https://github.com/huggingface/transformers/pull/9259",
"diff_url": "https://github.com/huggingface/transformers/pull/9259.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9259.patch",
"merged_at": 1608653579000
} |
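The "missing `()`" failure mode described in the record above is easy to reproduce in isolation; a tiny sketch with a made-up checker function, not the actual script:

```python
# Illustration of the bug class: referencing a function instead of calling it makes
# the condition test the (always truthy) function object, so the check never runs.
def objects_missing_docs():  # hypothetical checker
    return ["SomeUndocumentedClass"]

if objects_missing_docs:    # bug: evaluates the function object, branch always taken
    pass                    # nothing meaningful is actually checked here
if objects_missing_docs():  # fix: call the checker and inspect its result
    print("Found undocumented objects:", objects_missing_docs())
```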
https://api.github.com/repos/huggingface/transformers/issues/9258 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9258/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9258/comments | https://api.github.com/repos/huggingface/transformers/issues/9258/events | https://github.com/huggingface/transformers/issues/9258 | 773,030,618 | MDU6SXNzdWU3NzMwMzA2MTg= | 9,258 | torch.hub colab doesn't work | {
"login": "StevenJokess",
"id": 71307974,
"node_id": "MDQ6VXNlcjcxMzA3OTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/71307974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenJokess",
"html_url": "https://github.com/StevenJokess",
"followers_url": "https://api.github.com/users/StevenJokess/followers",
"following_url": "https://api.github.com/users/StevenJokess/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenJokess/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenJokess/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenJokess/subscriptions",
"organizations_url": "https://api.github.com/users/StevenJokess/orgs",
"repos_url": "https://api.github.com/users/StevenJokess/repos",
"events_url": "https://api.github.com/users/StevenJokess/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenJokess/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | NONE | null | https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/huggingface_pytorch-transformers.ipynb#scrollTo=T_3y0655Bqbj
```
%%bash
pip install tqdm boto3 requests regex sentencepiece sacremoses
```
Then none of the remaining cells work! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9258/timeline | completed | null | null |
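For reference, the notebook linked in the record above drives the `torch.hub` entry points along these lines; this is a sketch of typical usage, and if the pip-install cell fails, every later cell that loads the hub entry points will fail too, which is the likely underlying problem:

```python
# Sketch of the torch.hub entry points the linked notebook relies on.
# Requires the pip dependencies from the notebook's first cell to be installed.
import torch

tokenizer = torch.hub.load("huggingface/pytorch-transformers", "tokenizer", "bert-base-uncased")
model = torch.hub.load("huggingface/pytorch-transformers", "model", "bert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```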
https://api.github.com/repos/huggingface/transformers/issues/9257 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9257/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9257/comments | https://api.github.com/repos/huggingface/transformers/issues/9257/events | https://github.com/huggingface/transformers/issues/9257 | 773,023,666 | MDU6SXNzdWU3NzMwMjM2NjY= | 9,257 | Pegasus Documentation May Conflict With Seq2Seq ReadMe | {
"login": "kingpalethe",
"id": 11775831,
"node_id": "MDQ6VXNlcjExNzc1ODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/11775831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingpalethe",
"html_url": "https://github.com/kingpalethe",
"followers_url": "https://api.github.com/users/kingpalethe/followers",
"following_url": "https://api.github.com/users/kingpalethe/following{/other_user}",
"gists_url": "https://api.github.com/users/kingpalethe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingpalethe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingpalethe/subscriptions",
"organizations_url": "https://api.github.com/users/kingpalethe/orgs",
"repos_url": "https://api.github.com/users/kingpalethe/repos",
"events_url": "https://api.github.com/users/kingpalethe/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingpalethe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @kingpalethe,\r\n\r\nIn general, for BART and Marian models, training and eval is faster with fp16, except Pegasus and T5 which currently don't work well with fp16\r\n\r\nYes, the fine-tuning script is now moved under `examples/research_projects/seq2seq-distillation` dir,\r\nhttps://github.com/huggingface/transformers/tree/master/examples/research_projects/seq2seq-distillation\r\n\r\nThanks for reporting,\r\n\r\nAlso please note that this script is not maintained anymore and is provided as-is. We only maintain the `finetune_trainer.py` script now.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"#26521 @[email protected][#v0](url)",
"sio"
] | 1,608 | 1,696 | 1,614 | NONE | null | Here, under `tips and tricks`.....
https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#tips-and-tricks
`Both finetuning and eval are 30% faster with --fp16. For that you need to install apex.`
But in the documentation...
https://huggingface.co/transformers/master/model_doc/pegasus.html#examples
`FP16 is not supported (help/ideas on this appreciated!).`
Also in the documentation
https://huggingface.co/transformers/master/model_doc/pegasus.html#examples
`Script to fine-tune pegasus on the XSUM dataset.`
leads to a 404: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh
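As a hedged illustration of the `--fp16` tip quoted above, here is a minimal sketch using the maintained `finetune_trainer.py` script mentioned in the maintainer reply earlier in this record. The checkpoint, data directory and every flag other than `--fp16` are illustrative assumptions, and per that reply fp16 currently helps BART/Marian-style models but not Pegasus or T5:

```bash
# Install NVIDIA apex first (see https://github.com/NVIDIA/apex) so --fp16 is available.
python examples/seq2seq/finetune_trainer.py \
  --model_name_or_path sshleifer/distilbart-cnn-12-6 \
  --data_dir ./xsum \
  --output_dir ./distilbart-xsum-fp16 \
  --do_train \
  --fp16
```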
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9257/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9256 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9256/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9256/comments | https://api.github.com/repos/huggingface/transformers/issues/9256/events | https://github.com/huggingface/transformers/pull/9256 | 773,001,989 | MDExOlB1bGxSZXF1ZXN0NTQ0MTY5ODc5 | 9,256 | [EncoderDecoder] Make tests more aggressive | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Before merging #9183, we should make sure that the EncoderDecoder and caching tests are aggressive enough to be sure everything works as expected.
In addition, this PR refactors the `_expand_mask` function in Bart, making it cleaner and moving the responsibility correctly to the `attention_mask` creation.
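For readers unfamiliar with what these tests exercise, here is a rough sketch of the encoder-decoder generation path with caching enabled; the checkpoint choice and token-id settings are illustrative and are not taken from the test files added in this PR:

```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

# A freshly combined encoder-decoder needs these set explicitly before generate().
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("The EncoderDecoder and caching tests should be aggressive.", return_tensors="pt")
# use_cache=True exercises the past_key_values path that this PR tests more strictly.
generated = model.generate(inputs.input_ids, use_cache=True, max_length=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```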
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9256/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9256",
"html_url": "https://github.com/huggingface/transformers/pull/9256",
"diff_url": "https://github.com/huggingface/transformers/pull/9256.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9256.patch",
"merged_at": 1608652805000
} |
https://api.github.com/repos/huggingface/transformers/issues/9255 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9255/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9255/comments | https://api.github.com/repos/huggingface/transformers/issues/9255/events | https://github.com/huggingface/transformers/pull/9255 | 772,990,353 | MDExOlB1bGxSZXF1ZXN0NTQ0MTYwMzEw | 9,255 | Fix link to bertabs/README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9255/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9255",
"html_url": "https://github.com/huggingface/transformers/pull/9255",
"diff_url": "https://github.com/huggingface/transformers/pull/9255.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9255.patch",
"merged_at": 1608655284000
} |
https://api.github.com/repos/huggingface/transformers/issues/9254 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9254/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9254/comments | https://api.github.com/repos/huggingface/transformers/issues/9254/events | https://github.com/huggingface/transformers/pull/9254 | 772,986,122 | MDExOlB1bGxSZXF1ZXN0NTQ0MTU2ODk4 | 9,254 | Fix link to old language modeling script | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9254/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9254",
"html_url": "https://github.com/huggingface/transformers/pull/9254",
"diff_url": "https://github.com/huggingface/transformers/pull/9254.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9254.patch",
"merged_at": 1608655248000
} |
https://api.github.com/repos/huggingface/transformers/issues/9253 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9253/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9253/comments | https://api.github.com/repos/huggingface/transformers/issues/9253/events | https://github.com/huggingface/transformers/issues/9253 | 772,850,278 | MDU6SXNzdWU3NzI4NTAyNzg= | 9,253 | Prediction problem of glue task | {
"login": "anbo724",
"id": 769388,
"node_id": "MDQ6VXNlcjc2OTM4OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/769388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anbo724",
"html_url": "https://github.com/anbo724",
"followers_url": "https://api.github.com/users/anbo724/followers",
"following_url": "https://api.github.com/users/anbo724/following{/other_user}",
"gists_url": "https://api.github.com/users/anbo724/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anbo724/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anbo724/subscriptions",
"organizations_url": "https://api.github.com/users/anbo724/orgs",
"repos_url": "https://api.github.com/users/anbo724/repos",
"events_url": "https://api.github.com/users/anbo724/events{/privacy}",
"received_events_url": "https://api.github.com/users/anbo724/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have trained the glue task for mrpc, and I want to load the pretrained model and predict for new sentence pairs.\r\n\r\n```py\r\neval_dataset = load_dataset( \"json\", data_files={\"test\": \"/home/aa/paraphrase/data/qqp/tt.json\"})\r\neval_dataset = eval_dataset.map(preprocess_function, batched=False, load_from_cache_file=True)\r\nprint(eval_dataset['test']['idx'])\r\n\r\neval_dataset.remove_columns_(\"label\")\r\n\r\ntrainer = Trainer(model=model, tokenizer=tokenizer)\r\npredictions = trainer.predict(test_dataset=eval_dataset).predictions\r\nprint(predictions)\r\npredictions = np.array([softmax(element) for element in predictions])[:, 1]\r\n```\r\n\r\nAnd I got this :\r\n\r\n```\r\nload model finish\r\nUsing custom data configuration default\r\nReusing dataset json (/home/aa/.cache/huggingface/datasets/json/default-8988cd19f10ded6e/0.0.0/70d89ed4db1394f028c651589fcab6d6b28dddcabbe39d3b21b4d41f9a708514)\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 1413.70ex/s]\r\n[0, 1, 2, 3, 4, 5, 6, 7, 8]\r\nTraceback (most recent call last):\r\n File \"predict.py\", line 89, in <module>\r\n predictions = trainer.predict(test_dataset=eval_dataset).predictions\r\n File \"/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/transformers/trainer.py\", line 1381, in predict\r\n test_dataloader, description=\"Prediction\", ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix\r\n File \"/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/transformers/trainer.py\", line 1441, in prediction_loop\r\n for step, inputs in enumerate(dataloader):\r\n File \"/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 435, in __next__\r\n data = self._next_data()\r\n File \"/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 475, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/aaanbo/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\nKeyError: 0\r\n```\r\n\r\nWhy I got keyerror?\r\nAnyone can help or show me how to use the pretrained models for sentence pair prediction?\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9253/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/9252 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9252/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9252/comments | https://api.github.com/repos/huggingface/transformers/issues/9252/events | https://github.com/huggingface/transformers/pull/9252 | 772,844,155 | MDExOlB1bGxSZXF1ZXN0NTQ0MDM2MzU1 | 9,252 | Fix TF BART for saved model creation | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Quick question on the context before going deeper into the PR: At the moment all \"fast\" and \"slow\" `TFBart` tests are passing. I thought model creation is already tested currently. What is the exact use case for which `TFBart` currently fails? Should we maybe add a new `modeling_tf_commen.py` test that would prevent other TF models from having the same error?",
"Currently the `test_saved_model_with_hidden_states_output` and `test_saved_model_with_attentions_output` are just partially testing the creation of a saved model. When we extend the experiments (such as using use_cache and force the output to be a dict) it fails because some part of the graph was not taken into account when running the slow tests.\r\n\r\nI'm currently working on having proper saved model and testing most of the possible cases to create them, and currently TF BART fails for some of them when using a \"real\" serving approach. You can test it by yourself by adding\r\n```\r\[email protected](input_signature=[{\r\n \"input_ids\": tf.TensorSpec((None, None), tf.int32, name=\"input_ids\"),\r\n \"attention_mask\": tf.TensorSpec((None, None), tf.int32, name=\"attention_mask\"),\r\n \"decoder_input_ids\": tf.TensorSpec((None, None), tf.int32, name=\"decoder_input_ids\"),\r\n \"decoder_attention_mask\": tf.TensorSpec((None, None), tf.int32, name=\"decoder_attention_mask\"),\r\n}])\r\ndef serving(self, inputs):\r\n output = self.call(inputs)\r\n \r\n return self.serving_output(output)\r\n\r\ndef serving_output(self, output):\r\n return TFSeq2SeqLMOutput(\r\n loss=None,\r\n logits=output.logits,\r\n past_key_values=output.past_key_values,\r\n decoder_hidden_states=tf.convert_to_tensor(output.decoder_hidden_states)\r\n if self.config.output_hidden_states\r\n else None,\r\n decoder_attentions=tf.convert_to_tensor(output.decoder_attentions)\r\n if self.config.output_attentions\r\n else None,\r\n encoder_last_hidden_state=output.encoder_last_hidden_state,\r\n encoder_hidden_states=tf.convert_to_tensor(output.encoder_hidden_states)\r\n if self.config.output_hidden_states\r\n else None,\r\n encoder_attentions=tf.convert_to_tensor(output.decoder_attentions)\r\n if self.config.output_attentions\r\n else None,\r\n )\r\n```\r\n\r\nTo the `TFBartForConditionalGeneration` and run:\r\n```\r\nfrom transformers import TFBartForConditionalGeneration\r\nmodel = TFBartForConditionalGeneration.from_pretrained(\"sshleifer/bart-tiny-random\")\r\nmodel.save(\"here\", include_optimizer=False, signatures=model.serving)\r\n```\r\n\r\nYou can see the following error:\r\n```\r\nValueError: 'combined_attention_mask' is None at the end of the else branch.\r\n```\r\n\r\nThis is because, as you can see, the given input is different than the one we test with `dummy_inputs` or in the tests. Here we are compiling a part of the graph that is not used in those cases, and when this part comes to be compiled, it fails.\r\n\r\nLessons learned: To properly test the creation of a savedmodel/graph compilation+execution we have to test as much inputs as possible in order to be sure that all the part of the graph can be compiled and executed.\r\n\r\nEDIT: I'm also 100% sure that BART is not the only model concerned about this.",
"Slow tests are passing as well - just verified on brutasse. PR looks good to me now"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the graph execution issue in order to make BART able to create a proper saved model.
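For context, a minimal sketch of the export path being fixed here, assuming the `serving`/`serving_output` methods shown in the comment above have been added to `TFBartForConditionalGeneration` (they are not assumed to exist out of the box in this version); the output directory name is arbitrary:

```python
from transformers import TFBartForConditionalGeneration

# Tiny random checkpoint used in the reproduction above, so the export is fast.
model = TFBartForConditionalGeneration.from_pretrained("sshleifer/bart-tiny-random")

# Exporting with an explicit serving signature forces TensorFlow to compile the
# whole graph, including branches the dummy inputs never hit; before this fix
# that compilation failed with:
#   ValueError: 'combined_attention_mask' is None at the end of the else branch.
model.save("tf_bart_saved_model", include_optimizer=False, signatures=model.serving)
```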
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9252/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9252",
"html_url": "https://github.com/huggingface/transformers/pull/9252",
"diff_url": "https://github.com/huggingface/transformers/pull/9252.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9252.patch",
"merged_at": 1608656825000
} |
https://api.github.com/repos/huggingface/transformers/issues/9251 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9251/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9251/comments | https://api.github.com/repos/huggingface/transformers/issues/9251/events | https://github.com/huggingface/transformers/pull/9251 | 772,752,683 | MDExOlB1bGxSZXF1ZXN0NTQzOTYwMTA3 | 9,251 | Model Templates for Seq2Seq | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Improvements to TFBart: https://github.com/huggingface/transformers/pull/9252 are now included in this PR as well."
] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds the possibility to generate Encoder-Decoder models via the cookie-cutter tool.
Model is correctly generated for PT and TF with all tests passing. Two test files are added.
These templates should very much facilitate the addition of Pegasus, Blenderbot, Marian as separate model files as well as adding BigBird, etc...
Please note that for now the safety checks: `# Copied from transformers.models.bart.modeling_bart...` are only added for very few layers because:
- Bart has some hacks that we should not copy for new models, but that we need to keep for backwards compatibility. E.g. positional embeddings have an offset hack leading to slightly too large positional embeddings, which we should not repeat (same as in RoBERTa), automatic creation of `decoder_input_ids` is a special feature and not the default case, Sinusoidal position embeddings are IMO also not general enough to be in the templates
- `modeling_bart.py` still has the `add_layer_norm` hacks which are not copied to the model templates. When Bart is separated into Pegasus, etc... those if-else hacks can be deleted from `modeling_bart.py` at which point some more `# Copied from transformers.models.bart.modeling_bart...` should be added to the Seq2Seq model templates
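For readers who want to try the new templates, a rough sketch of the flow described above; the command is interactive, so the extras name and the example notes in the comments are assumptions rather than part of this PR:

```bash
# From a source checkout of transformers; the "modelcreation" extras name is an assumption.
pip install -e ".[modelcreation]"

# Launches the cookiecutter-based wizard; it asks for the model name, authors,
# checkpoint identifier and (with this PR) whether the model is an encoder-decoder.
transformers-cli add-new-model
```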
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9251/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9251",
"html_url": "https://github.com/huggingface/transformers/pull/9251",
"diff_url": "https://github.com/huggingface/transformers/pull/9251.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9251.patch",
"merged_at": 1608676881000
} |
https://api.github.com/repos/huggingface/transformers/issues/9250 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9250/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9250/comments | https://api.github.com/repos/huggingface/transformers/issues/9250/events | https://github.com/huggingface/transformers/issues/9250 | 772,706,070 | MDU6SXNzdWU3NzI3MDYwNzA= | 9,250 | ValueError: Tokenizer class T5Tokenizer does not exist or is not currently imported. | {
"login": "nsankar",
"id": 431890,
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsankar",
"html_url": "https://github.com/nsankar",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"repos_url": "https://api.github.com/users/nsankar/repos",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @nsankar,\r\n\r\nI cannot reproduce the above error concerning the tokenizer. The tokenizer is loaded correctly in my command line.\r\nHowever it seems like the model weights are not 100% correct.\r\n\r\n@mrm8488 when I load the model via:\r\n\r\n```python\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"mrm8488/mT5-small-finetuned-tydiqa-for-xqa\")\r\n```\r\n\r\nI get the following warning:\r\n```\r\n2020-12-22 11:59:05.111580: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2020-12-22 11:59:05.111618: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nSome weights of the model checkpoint at mrm8488/mT5-small-finetuned-tydiqa-for-xqa were not used when initializing T5ForConditionalGeneration: ['encoder.block.0.layer.1.DenseReluDense.wi.weight', 'encoder.block.1.layer.1.DenseReluDense.wi.weight', 'encoder.block.2.layer.1.DenseReluDense.wi.weight', 'encoder.block.3.layer.1.DenseReluDense.wi.weight', 'encoder.block.4.layer.1.DenseReluDense.wi.weight', 'encoder.block.5.layer.1.DenseReluDense.wi.weight', 'encoder.block.6.layer.1.DenseReluDense.wi.weight', 'encoder.block.7.layer.1.DenseReluDense.wi.weight', 'decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight', 'decoder.block.0.layer.2.DenseReluDense.wi.weight', 'decoder.block.1.layer.2.DenseReluDense.wi.weight', 'decoder.block.2.layer.2.DenseReluDense.wi.weight', 'decoder.block.3.layer.2.DenseReluDense.wi.weight', 'decoder.block.4.layer.2.DenseReluDense.wi.weight', 'decoder.block.5.layer.2.DenseReluDense.wi.weight', 'decoder.block.6.layer.2.DenseReluDense.wi.weight', 'decoder.block.7.layer.2.DenseReluDense.wi.weight']\r\n- This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of T5ForConditionalGeneration were not initialized from the model checkpoint at mrm8488/mT5-small-finetuned-tydiqa-for-xqa and are newly initialized: ['encoder.block.0.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.0.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.1.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.1.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.2.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.2.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.3.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.3.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.4.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.4.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.5.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.5.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.6.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.6.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.7.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.7.layer.1.DenseReluDense.wi_1.weight', 'decoder.block.0.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.0.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.1.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.1.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.2.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.2.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.3.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.3.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.4.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.4.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.5.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.5.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.6.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.6.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.7.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.7.layer.2.DenseReluDense.wi_1.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\n-> I think the weights uploaded here correspond to the \"old\" T5 version. It would be awesome if you could check the weights :-) \r\nAlso in the config: https://huggingface.co/mrm8488/mT5-small-finetuned-tydiqa-for-xqa/blob/main/config.json, the architecture `\"T5ForConditionalGeneration\"` is used as well as `\"t5\"` for the model type, but it should be `\"MT5ForConditionalGeneration\"` and `\"mt5\"` I think :-) ",
"Thanks @patrickvonplaten. I will check it out, ASAP.",
"It seems to happen with other models:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\ntokenizer = AutoTokenizer.from_pretrained(\"moussaKam/mbarthez\")\r\n\r\nTraceback (most recent call last):\r\n File \"/home/user/.local/share/virtualenvs/project/lib/python3.8/site-packages/IPython/core/interactiveshell.py\", line 3418, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-4-028660e65504>\", line 3, in <module>\r\n tokenizer = AutoTokenizer.from_pretrained(\"moussaKam/mbarthez\")\r\n File \"/home/user/.local/share/virtualenvs/project/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py\", line 359, in from_pretrained\r\n raise ValueError(\r\nValueError: Tokenizer class BarthezTokenizer does not exist or is not currently imported.\r\n```\r\n\r\nAnd:\r\n```\r\n(project) user@ubuntu:/mnt/workspace/project$ pip list | grep transformers\r\ntransformers 4.1.1\r\n```",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I had a similar problem `ValueError: Tokenizer class M2M100Tokenizer does not exist or is not currently imported.` and solved it by running `pip install sentencepiece`\r\n\r\nSeems that when missing the `sentencepiece` package, `AutoTokenizer.from_pretrained` will silently not load the tokenizer and then crash later.",
"> I had a similar problem `ValueError: Tokenizer class M2M100Tokenizer does not exist or is not currently imported.` and solved it by running `pip install sentencepiece`\r\n> \r\n> Seems that when missing the `sentencepiece` package, `AutoTokenizer.from_pretrained` will silently not load the tokenizer and then crash later.\r\n\r\nThis works fabulously with DeBerta models as well, seems that the error isn't very descriptive.",
"I think on current master a better error message is given when `from_pretrained(...)` is called from a dummy object cc @sgugger :-)",
"> I had a similar problem `ValueError: Tokenizer class M2M100Tokenizer does not exist or is not currently imported.` and solved it by running `pip install sentencepiece`\r\n> \r\n> Seems that when missing the `sentencepiece` package, `AutoTokenizer.from_pretrained` will silently not load the tokenizer and then crash later.\r\n\r\nwhile it doesn't work for me. :-(\r\n\r\n`\r\ntokenizer = AutoTokenizer.from_pretrained(\"bigscience/bloom-560m\")\r\n\r\nValueError: Tokenizer class BloomTokenizerFast does not exist or is not currently imported.\r\n`",
"> > I had a similar problem `ValueError: Tokenizer class M2M100Tokenizer does not exist or is not currently imported.` and solved it by running `pip install sentencepiece`\r\n> > Seems that when missing the `sentencepiece` package, `AutoTokenizer.from_pretrained` will silently not load the tokenizer and then crash later.\r\n> \r\n> while it doesn't work for me. :-(\r\n> \r\n> ` tokenizer = AutoTokenizer.from_pretrained(\"bigscience/bloom-560m\")\r\n> \r\n> ValueError: Tokenizer class BloomTokenizerFast does not exist or is not currently imported. `\r\n\r\nwell, newest version of transformers works for me.",
"I'm getting the same error with transformers==4.26 when trying to load [ernie-m-base](https://huggingface.co/PaddlePaddle/ernie-m-base) with\r\n\r\n```\r\nMODEL_NAME = \"PaddlePaddle/ernie-m-base\" \r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True, model_max_length=max_length) # model_max_length=512\r\nmodel = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, label2id=label2id, id2label=id2label).to(device) \r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/gpfs/home5/laurerm/nli-scratch/nli_training.py\", line 41, in <module>\r\n tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True, model_max_length=max_length) # model_max_length=512\r\n File \"/home/laurerm/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py\", line 655, in from_pretrained\r\n raise ValueError(\r\nValueError: Tokenizer class ErnieMTokenizer does not exist or is not currently imported.\r\n```\r\n\r\nThe exact same code worked two days ago with XLM-V. I've made sure that sentencepiece is installed.\r\n\r\nEdit: Ah I think the error currently comes up because ernie-m is on the hub, but not yet merged into master for transformers https://github.com/huggingface/transformers/pull/21349 (?)"
] | 1,608 | 1,676 | 1,614 | NONE | null | @mfuntowicz
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Latest transformers==4.2.0.dev0
- Platform: Colab
- Python version: Python 3.6.9
- PyTorch version (GPU?): torch==1.7.0+cu101
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
The following code indicated in the latest HF news letter seems to have isssues when I tried
I get tokenizer error both under Fast and Slow (True/Flase tokenizer parameter) conditions when I had checked
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ]
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa",use_fast=False )
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
context = "HuggingFace won the best Demo paper at EMNLP2020."
question = "What won HuggingFace?"
input_text = 'question: %s context: %s' % (question, context)
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(**features)
tokenizer.decode(output[0])
```
## To reproduce
Steps to reproduce the behavior:
1. Run the above code on Google Colab
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
-->
**ERROR reported**
`ValueError Traceback (most recent call last)
<ipython-input-3-87256159791c> in <module>()
10 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
11
---> 12 tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa",use_fast=False )
13
14 model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
358 if tokenizer_class is None:
359 raise ValueError(
--> 360 "Tokenizer class {} does not exist or is not currently imported.".format(tokenizer_class_candidate)
361 )
362 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
ValueError: Tokenizer class T5Tokenizer does not exist or is not currently imported.`
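Based on the discussion in the comments earlier in this record, a minimal sketch of a possible workaround: make sure `sentencepiece` is installed and load the checkpoint with the mT5 classes explicitly. Treat this as a hedged suggestion, not a confirmed fix for this exact checkpoint:

```python
# pip install sentencepiece  # without it, AutoTokenizer can fail to build the slow tokenizer
from transformers import MT5ForConditionalGeneration, T5Tokenizer

model_id = "mrm8488/mT5-small-finetuned-tydiqa-for-xqa"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

context = "HuggingFace won the best Demo paper at EMNLP2020."
question = "What won HuggingFace?"
features = tokenizer(f"question: {question} context: {context}", return_tensors="pt")
output = model.generate(**features)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```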
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9250/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9249 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9249/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9249/comments | https://api.github.com/repos/huggingface/transformers/issues/9249/events | https://github.com/huggingface/transformers/issues/9249 | 772,681,750 | MDU6SXNzdWU3NzI2ODE3NTA= | 9,249 | GPT2 distributed TPU pre-training using run_clm.py | {
"login": "mukhtar-algezoli",
"id": 38084259,
"node_id": "MDQ6VXNlcjM4MDg0MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/38084259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mukhtar-algezoli",
"html_url": "https://github.com/mukhtar-algezoli",
"followers_url": "https://api.github.com/users/mukhtar-algezoli/followers",
"following_url": "https://api.github.com/users/mukhtar-algezoli/following{/other_user}",
"gists_url": "https://api.github.com/users/mukhtar-algezoli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mukhtar-algezoli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mukhtar-algezoli/subscriptions",
"organizations_url": "https://api.github.com/users/mukhtar-algezoli/orgs",
"repos_url": "https://api.github.com/users/mukhtar-algezoli/repos",
"events_url": "https://api.github.com/users/mukhtar-algezoli/events{/privacy}",
"received_events_url": "https://api.github.com/users/mukhtar-algezoli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It clearly states you're out of hbm memory, which is the TPU memory from what Google tells me. I think you have to specify a lower batch size or a lower `block_size` (GPT-2 uses a very big one by default).",
"@sgugger yup, this was exactly what's wrong, my batch size (per device) is 8 so didn't consider that I am overloading the TPU but totally forgot about block_size (which defaults to 1024). \r\nThank you very much."
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.0dev0
- Platform: Linux-4.9.0-14-amd64-x86_64-with-debian-9.13
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.0a0+5c3788d (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: yes using V3-8 TPUs
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ 1] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ 1] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a GCP VM with PyTorch/XLA support
2. Create a V3-8 TPU
3. Install the transformers and datasets libraries and then run this code:
```ruby
python3 transformers/examples/xla_spawn.py --num_cores=8 \
transformers/examples/language-modeling/run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
then this error comes up:
```ruby
Traceback (most recent call last):
File "transformers/examples/xla_spawn.py", line 85, in <module>
main()
File "transformers/examples/xla_spawn.py", line 81, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 394, in spawn
start_method=start_method)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 205, in start_processes
while not context.join():
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 160, in join
exit_code=exitcode
torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with exit code 17
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
and those exceptions come up before the error:
```ruby
[[{{node XRTCompile}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: Ran out of memory in memory space hbm. Used 16.61G of 15.98G hbm. Exceeded hbm capacity by 645.48M.
Total hbm usage >= 16.63G:
reserved 18.00M
program 13.93G
arguments 2.68G (100.0% utilization)
Output size 192.01M (100.0% utilization); shares 0B with arguments.
Program hbm requirement 13.93G:
global 4.0K
HLO temp 13.93G (91.5% utilization: Unpadded (12.75G) Padded (13.93G), 0.0% fragmentation (2.60M))
Largest program allocations in hbm:
1. Size: 1.53G
Shape: pred[8,1023,50257]{1,2,0:T(8,128)E(32)}
Unpadded size: 392.25M
Extra memory due to padding: 1.15G (4.0x expansion)
XLA label: %broadcast.4850.remat3 = pred[8,1023,50257]{1,2,0:T(8,128)E(32)} broadcast(pred[]{:T(256)E(32)} %constant.4065), dimensions={}
Allocation type: HLO temp
==========================
2. Size: 785.38M
Shape: bf16[8,1023,50257]{1,2,0:T(8,128
```
## Expected behavior
I don't think this is an OOM problem since I am using an 8-core TPU, so it must be an XLA multiprocessing problem.
<!-- A clear and concise description of what you would expect to happen. -->
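Given the resolution in the comments earlier in this record (the TPU ran out of HBM memory), here is a sketch of the same launch with a smaller `--block_size` and per-device batch size; the particular values are illustrative assumptions:

```bash
python3 transformers/examples/xla_spawn.py --num_cores=8 \
  transformers/examples/language-modeling/run_clm.py \
  --model_name_or_path gpt2 \
  --dataset_name wikitext \
  --dataset_config_name wikitext-2-raw-v1 \
  --do_train \
  --do_eval \
  --block_size 512 \
  --per_device_train_batch_size 4 \
  --per_device_eval_batch_size 4 \
  --output_dir /tmp/test-clm
```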
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9249/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9248 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9248/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9248/comments | https://api.github.com/repos/huggingface/transformers/issues/9248/events | https://github.com/huggingface/transformers/issues/9248 | 772,674,200 | MDU6SXNzdWU3NzI2NzQyMDA= | 9,248 | numpy ndarray type is not allowed on process pytorch model | {
"login": "LoveMeWithoutAll",
"id": 4844714,
"node_id": "MDQ6VXNlcjQ4NDQ3MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4844714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoveMeWithoutAll",
"html_url": "https://github.com/LoveMeWithoutAll",
"followers_url": "https://api.github.com/users/LoveMeWithoutAll/followers",
"following_url": "https://api.github.com/users/LoveMeWithoutAll/following{/other_user}",
"gists_url": "https://api.github.com/users/LoveMeWithoutAll/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoveMeWithoutAll/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoveMeWithoutAll/subscriptions",
"organizations_url": "https://api.github.com/users/LoveMeWithoutAll/orgs",
"repos_url": "https://api.github.com/users/LoveMeWithoutAll/repos",
"events_url": "https://api.github.com/users/LoveMeWithoutAll/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoveMeWithoutAll/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nThanks for reporting this, we will apply a fix for the next release!",
"@LoveMeWithoutAll thanks for issue!\r\n\r\nCould you specific your use case here a bit? Do you want to convert a PyTorch model to a tensorflow model and consequently train the tensorflow model? Why do we need to forward `ndarray` types?",
"@patrickvonplaten Hello\r\nI'm using [Rasa](https://rasa.com) framework that embedding HuggingFace. Rasa is not yet support Pytorch model, so I must convert from Pytorch to TF model for using pre-trained model. Then `ndarray` is needed for convert model.",
"@LoveMeWithoutAll Hi, I am also using the Rasa framework with HFTransformers and Language Models in pipeline config. Faced the same issue with the latest transformers, but it works with lower versions, transformers-2.9.0. I haven't tested out on other versions (probably some 3.x might work too!), but 2.9.0 shall work fine if your project setup allows lower version transformers.",
"> @LoveMeWithoutAll Hi, I am also using the Rasa framework with HFTransformers and Language Models in pipeline config. Faced the same issue with the latest transformers, but it works with lower versions, transformers-2.9.0. I haven't tested out on other versions (probably some 3.x might work too!), but 2.9.0 shall work fine if your project setup allows lower version transformers.\n\nthank you for your advice! i'll try it as you did",
"This was fixed on `master`: https://github.com/huggingface/transformers/pull/9294\r\n\r\nWe'll release a new version tomorrow, which will benefit from this change. Thanks for reporting it!",
"Thank you for your effort!"
] | 1,608 | 1,612 | 1,612 | NONE | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
tensorflow: @jplu
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [o] my own modified scripts: (give details below)
Changed one line so that `from_pt`'s default value is `True` instead of `False` (https://github.com/huggingface/transformers/blob/ec07da65e25562040581febaf9b400a462962961/src/transformers/modeling_tf_utils.py#L947)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [o] my own task or dataset: (give details below)
HuggingFace: monologg/kobert
I loaded my pretrained PyTorch model, and an error occurred in the `input_processing` function.
## To reproduce
Steps to reproduce the behavior:
1. Run `input_processing`.
2. `numpy.ndarray` is not an allowed type (https://github.com/huggingface/transformers/blob/ec07da65e25562040581febaf9b400a462962961/src/transformers/modeling_tf_utils.py#L331)
3. So the ndarray cannot be processed (https://github.com/huggingface/transformers/blob/ec07da65e25562040581febaf9b400a462962961/src/transformers/modeling_tf_utils.py#L354)
4. There is no proper conversion for ndarray, unlike the other types (dict, Tensor, etc.)
5. The following error occurs:
```
File "/Users/ys/dev/rasa/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 357, in input_processing
raise ValueError(f"Data of type {type(v)} is not allowed only {allowed_types} is accepted for {k}.")
ValueError: Data of type <class 'numpy.ndarray'> is not allowed only (<class 'tensorflow.python.framework.ops.Tensor'>, <class 'bool'>, <class 'int'>, <class 'transformers.file_utils.ModelOutput'>, <class 'tuple'>, <class 'list'>, <class 'dict'>) is accepted for attention_mask.
```
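Until ndarray inputs are accepted directly (the fix referenced in the comments, #9294), one possible caller-side workaround — shown here only as a hedged sketch, with an illustrative feature dict — is to convert numpy arrays to `tf.Tensor` before calling the model:
```
# Sketch of a caller-side workaround (not the library fix): convert numpy arrays
# to tf.Tensor so that input_processing only sees allowed types.
import numpy as np
import tensorflow as tf

def to_tf_inputs(features: dict) -> dict:
    """Convert any numpy arrays in a feature dict to tf.Tensor, leaving other values untouched."""
    return {k: tf.convert_to_tensor(v) if isinstance(v, np.ndarray) else v for k, v in features.items()}

# Illustrative usage:
# outputs = model(**to_tf_inputs({"input_ids": input_ids, "attention_mask": attention_mask}))
```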
## Expected behavior
Train success! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9248/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9247 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9247/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9247/comments | https://api.github.com/repos/huggingface/transformers/issues/9247/events | https://github.com/huggingface/transformers/issues/9247 | 772,641,305 | MDU6SXNzdWU3NzI2NDEzMDU= | 9,247 | T5 tokenizer.vocab_size and config.vocab_size mismatch? | {
"login": "ArvinZhuang",
"id": 46237844,
"node_id": "MDQ6VXNlcjQ2MjM3ODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/46237844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArvinZhuang",
"html_url": "https://github.com/ArvinZhuang",
"followers_url": "https://api.github.com/users/ArvinZhuang/followers",
"following_url": "https://api.github.com/users/ArvinZhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/ArvinZhuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArvinZhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArvinZhuang/subscriptions",
"organizations_url": "https://api.github.com/users/ArvinZhuang/orgs",
"repos_url": "https://api.github.com/users/ArvinZhuang/repos",
"events_url": "https://api.github.com/users/ArvinZhuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArvinZhuang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of https://github.com/huggingface/transformers/issues/4875.",
"I see, I simply ignored this mismatch and seems nothing wrong with prediction.\r\nThank you!"
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 4.1.1
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1
- tokenizers: 0.9.4
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
Hi @patrickvonplaten
I am trying to train a "t5-base" model and I directly use the `from_pretrained` tokenizer, config and model. However, I found that the vocabulary size given by the tokenizer and by the config is different (see "To reproduce"). Is this expected?
If I use the model `T5ForConditionalGeneration.from_pretrained('t5-base', config=config)` to do predictions, this results in the last dimension of lm_logits being different from `tokenizer.vocab_size`.
## To reproduce
Steps to reproduce the behavior:
```
>>> from transformers import T5Tokenizer, T5Config
>>> tokenizer = T5Tokenizer.from_pretrained("t5-base")
>>> config = T5Config.from_pretrained("t5-base")
>>> print(tokenizer.vocab_size)
32100
>>> print(config.vocab_size)
32128
```
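For reference (per the duplicate linked in the comments), the gap comes from the model's embedding matrix being larger than the tokenizer's vocabulary, so logits for ids at or above `tokenizer.vocab_size` correspond to unused rows and can be ignored. A quick sketch to check this, assuming only the standard T5 classes:
```
# Sketch: compare the tokenizer size with the config and the embedding matrix.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

print(len(tokenizer))                                # 32100
print(model.config.vocab_size)                       # 32128
print(model.get_input_embeddings().weight.shape[0])  # 32128 -> last dimension of lm_logits
```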
## Expected behavior
```
>>> print(tokenizer.vocab_size)
32128
>>> print(config.vocab_size)
32128
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9247/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9247/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9246 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9246/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9246/comments | https://api.github.com/repos/huggingface/transformers/issues/9246/events | https://github.com/huggingface/transformers/issues/9246 | 772,630,791 | MDU6SXNzdWU3NzI2MzA3OTE= | 9,246 | AssertionError: Non-consecutive added token '<pad>' found. Should have index 40002 but has index 40000 in saved vocabulary | {
"login": "thesby",
"id": 10773886,
"node_id": "MDQ6VXNlcjEwNzczODg2",
"avatar_url": "https://avatars.githubusercontent.com/u/10773886?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thesby",
"html_url": "https://github.com/thesby",
"followers_url": "https://api.github.com/users/thesby/followers",
"following_url": "https://api.github.com/users/thesby/following{/other_user}",
"gists_url": "https://api.github.com/users/thesby/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thesby/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thesby/subscriptions",
"organizations_url": "https://api.github.com/users/thesby/orgs",
"repos_url": "https://api.github.com/users/thesby/repos",
"events_url": "https://api.github.com/users/thesby/events{/privacy}",
"received_events_url": "https://api.github.com/users/thesby/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think the problem is that the saved tokenizer saves `len(tokenizer)` = 40002. So when I load it, the added tokens id starts from 40000, the error occurs.",
"Hey @thesby,\r\n\r\nDid you add any special tokens to `XLMRobertaTokenizer` that weren't there previously? \r\nCould you copy/paste the code you used to train the tokenizer here as well? Thanks!",
"All special tokens were added by sentencepiece trainer. I have never add by myself.\r\n```\r\nLD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib64/ spm_train --input=main.txt --model_prefix=sentencepice3.bpe --vocab_size=40000 --character_coverage=0.9995 --model_type=bpe --max_sentencepiece_length=4 --num_threads=64 --split_digits=true --input_sentence_size=2000000 -shuffle_input_sentence=true\r\n```",
"Maybe I should train with `unigram`, not `bpe`.",
"@thesby have you solved this problem?",
"Hi,\r\n\r\nI'm having a similar problem. \r\n\r\n```\r\nfrom transformers import GPT2Tokenizer\r\n\r\nclass VisualCometTokenizer(GPT2Tokenizer):\r\n def __init__(self,\r\n vocab_file,\r\n merges_file,\r\n errors='replace',\r\n unk_token=\"<|endoftext|>\",\r\n bos_token=\"<|endoftext|>\",\r\n eos_token=\"<|endoftext|>\",\r\n begin_img=\"<|b_img|>\",\r\n end_img=\"<|e_img|>\",\r\n begin_event=\"<|b_ev|>\",\r\n end_event=\"<|e_ev|>\",\r\n begin_place=\"<|b_pl|>\",\r\n end_place=\"<|e_pl|>\",\r\n begin_inferences={'before': \"<|before|>\", 'intent': \"<|intent|>\", 'after': \"<|after|>\"},\r\n end_inference=\"<|e_in|>\",\r\n **kwargs):\r\n super(VisualCometTokenizer, self).__init__(\r\n vocab_file,\r\n merges_file,\r\n errors=errors,\r\n bos_token=bos_token,\r\n eos_token=eos_token,\r\n unk_token=unk_token,\r\n **kwargs\r\n )\r\n\r\n self.begin_img = begin_img\r\n self.end_img = end_img\r\n self.begin_event = begin_event\r\n self.end_event = end_event\r\n self.begin_place = begin_place\r\n self.end_place = end_place\r\n self.begin_inferences = begin_inferences\r\n self.end_inference = end_inference\r\n self.det_tokens = ['<|det%d|>' % i for i in range(50)]\r\n self.add_special_tokens({\r\n \"additional_special_tokens\": [self.begin_img, self.end_img, self.begin_event, self.end_event,\r\n self.begin_place, self.end_place, self.end_inference]\r\n + list(self.begin_inferences.values()) + self.det_tokens\r\n })\r\n\r\ntokenizer = VisualCometTokenizer.from_pretrained(\"gpt2\")\r\ntokenizer.save_pretrained(\"/content/test\")\r\n\r\ntokenizer =VisualCometTokenizer.from_pretrained(\"/content/test\")\r\n\r\n```\r\n\r\nWill cause this:\r\n\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<ipython-input-34-e955f380827e> in <module>()\r\n 46 tokenizer.save_pretrained(\"/content/test\")\r\n 47 \r\n---> 48 tokenizer =VisualCometTokenizer.from_pretrained(\"/content/test\")\r\n\r\n1 frames\r\n/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)\r\n 1809 for token, index in added_tok_encoder_sorted:\r\n 1810 assert index == len(tokenizer), (\r\n-> 1811 f\"Non-consecutive added token '{token}' found. \"\r\n 1812 f\"Should have index {len(tokenizer)} but has index {index} in saved vocabulary.\"\r\n 1813 )\r\n\r\nAssertionError: Non-consecutive added token '<|b_img|>' found. Should have index 50317 but has index 50257 in saved vocabulary.\r\n\r\n\r\n++\r\n\r\nI moved the \"add_special_tokens\" outside of init and it loads fine. You have to add tokens outside of init and save and when you load the tokenizer again, it won't try to add the tokens twice.\r\n\r\nBetter fix would be to permit adding the same token to the tokenizer again or throw a warning in huggingface's tokenizer_utils.py",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | NONE | null | torch: 1.6.0
transformers: 3.5.1
OS: centos 7
GPU: A100
I trained a sentencepiece BPE model. There is no problem if I load it with `XLMRobertaTokenizer`. But when I load it with `XLMRobertaTokenizerFast`, it takes a long time in `transformers/convert_slow_tokenizer.py`. After saving the tokenizer and loading it with `from_pretrained`, the error occurs:
```
/ProjectRoot/tp_origin/pyenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
1758 for token, index in added_tok_encoder_sorted:
1759 assert index == len(tokenizer), (
-> 1760 f"Non-consecutive added token '{token}' found. "
1761 f"Should have index {len(tokenizer)} but has index {index} in saved vocabulary."
1762 )
AssertionError: Non-consecutive added token '<pad>' found. Should have index 40002 but has index 40000 in saved vocabulary.
```
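A possible workaround, adapted from the pattern suggested in the comments (register the special tokens once after constructing the tokenizer, then save). This is only a hedged sketch — the file names are illustrative and it is not confirmed here that it removes the index clash for the fast tokenizer:
```
# Sketch: build the slow tokenizer from the trained sentencepiece model, register
# the special tokens explicitly, then save; the saved directory is what gets
# loaded later with from_pretrained.
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer("sentencepice3.bpe.model")   # illustrative path to the spm model
tokenizer.add_special_tokens({"pad_token": "<pad>", "mask_token": "<mask>"})
tokenizer.save_pretrained("my_tokenizer")                    # illustrative output directory
```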
I found that a file `added_tokens.json` is created with the content `{"<pad>": 40000, "<mask>": 40001}`.
My tokenizer:
```
PreTrainedTokenizerFast(name_or_path='/ProjectRoot/tp_origin/distillation/tmp/student_init_model3/0_Transformer', vocab_size=40000, model_max_len=514, is_fast=True, padding_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '</s>', 'pad_token': '<pad>', 'cls_token': '<s>', 'mask_token': '<mask>'})
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9246/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9245 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9245/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9245/comments | https://api.github.com/repos/huggingface/transformers/issues/9245/events | https://github.com/huggingface/transformers/issues/9245 | 772,628,462 | MDU6SXNzdWU3NzI2Mjg0NjI= | 9,245 | [s2s] test_finetune_trainer_slow fails when run in group | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I also have the same failure on one GPU on my side FYI (but no failure when run on its own).",
"Thank you, @sgugger - your input helped a lot to reduce the sequence quickly! So this sequence fails:\r\n```\r\nCUDA_VISIBLE_DEVICES=0 RUN_SLOW=1 pytest \\\r\ntest_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_apex \\ \r\ntest_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow\r\n```\r\nSomething about apex.\r\n\r\nThis sequence with another similar test before it but no apex doesn't fail:\r\n```\r\nCUDA_VISIBLE_DEVICES=0 RUN_SLOW=1 pytest \\\r\ntest_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_no_dist \\\r\ntest_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow\r\n```",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | CONTRIBUTOR | null | On dual gpu when running `test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow` alone - all is good.
When running it together with all the other tests in that file, it fails:
```
RUN_SLOW=1 pytest -sv test_finetune_trainer.py
[...]
self = <seq2seq.test_finetune_trainer.TestFinetuneTrainer testMethod=test_finetune_trainer_slow>
@slow
def test_finetune_trainer_slow(self):
# There is a missing call to __init__process_group somewhere
output_dir = self.run_trainer(
eval_steps=2, max_len="128", model_name=MARIAN_MODEL, num_train_epochs=10, distributed=False
)
# Check metrics
logs = TrainerState.load_from_json(os.path.join(output_dir, "trainer_state.json")).log_history
eval_metrics = [log for log in logs if "eval_loss" in log.keys()]
first_step_stats = eval_metrics[0]
last_step_stats = eval_metrics[-1]
> assert first_step_stats["eval_bleu"] < last_step_stats["eval_bleu"] # model learned nothing
E AssertionError: assert 0.0 < 0.0
test_finetune_trainer.py:130: AssertionError
----------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------
WARNING seq2seq.finetune_trainer:finetune_trainer.py:160 Process rank: -1, device: cuda:0, n_gpu: 2, distributed training: False, 16-bits training: False
==================================================================== short test summary info ====================================================================
FAILED test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow - AssertionError: assert 0.0 < 0.0
================================================ 1 failed, 7 passed, 1 skipped, 17 warnings in 102.82s (0:01:42) ================================================
```
For some reason it fails to learn anything when some other tests run before it.
Tested with pytorch-nightly + py38.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9245/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9244 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9244/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9244/comments | https://api.github.com/repos/huggingface/transformers/issues/9244/events | https://github.com/huggingface/transformers/issues/9244 | 772,614,539 | MDU6SXNzdWU3NzI2MTQ1Mzk= | 9,244 | BatchEncoding.to accepted types too restrictive | {
"login": "jethrokuan",
"id": 1667473,
"node_id": "MDQ6VXNlcjE2Njc0NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1667473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jethrokuan",
"html_url": "https://github.com/jethrokuan",
"followers_url": "https://api.github.com/users/jethrokuan/followers",
"following_url": "https://api.github.com/users/jethrokuan/following{/other_user}",
"gists_url": "https://api.github.com/users/jethrokuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jethrokuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jethrokuan/subscriptions",
"organizations_url": "https://api.github.com/users/jethrokuan/orgs",
"repos_url": "https://api.github.com/users/jethrokuan/repos",
"events_url": "https://api.github.com/users/jethrokuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jethrokuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.14.81.bm.15-amd64-x86_64-with-debian-9.11
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
## Information
In `BatchEncoding.to`, the only accepted class types are `str` and `torch.device`. I think some libraries like pytorch-lightning call `.to` with the integer value for the GPU number, and HF complains about this when it is perfectly valid:
```
>>> x = torch.zeros(1)
>>> x.to(0)
tensor([0.], device='cuda:0')
```
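Until `BatchEncoding.to` accepts plain ints, one caller-side option (a sketch, not the library's API) is to normalize the index to a `torch.device` first:
```
# Sketch: wrap an int GPU index into a torch.device before handing the batch over.
import torch

def move_batch(batch, device):
    if isinstance(device, int):
        device = torch.device("cuda", device)
    return batch.to(device)

# Illustrative usage:
# batch = move_batch(tokenizer("hello", return_tensors="pt"), 0)
```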
## Expected behavior
Also allow `int` values in `BatchEncoding.to`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9244/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9243 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9243/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9243/comments | https://api.github.com/repos/huggingface/transformers/issues/9243/events | https://github.com/huggingface/transformers/issues/9243 | 772,570,214 | MDU6SXNzdWU3NzI1NzAyMTQ= | 9,243 | AssertionError with model_parallel in run_clm.py | {
"login": "laphang",
"id": 24724502,
"node_id": "MDQ6VXNlcjI0NzI0NTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laphang",
"html_url": "https://github.com/laphang",
"followers_url": "https://api.github.com/users/laphang/followers",
"following_url": "https://api.github.com/users/laphang/following{/other_user}",
"gists_url": "https://api.github.com/users/laphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laphang/subscriptions",
"organizations_url": "https://api.github.com/users/laphang/orgs",
"repos_url": "https://api.github.com/users/laphang/repos",
"events_url": "https://api.github.com/users/laphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/laphang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm not the expert on the model parallel feature, but I think it's not supposed to be launched with `torch.distributed` as it will only use one process, then split the layers of your model on several GPUs.",
"Thanks for the fast response @sgugger \r\n\r\nI tried your suggestion and ran the following (removing the torch.distributed.launch):\r\n```\r\npython run_clm.py \\\r\n --do_train \\\r\n --do_eval \\\r\n --fp16 \\\r\n --logging_first_step \\\r\n --model_parallel \\\r\n --evaluation_strategy epoch \\\r\n --logging_steps 50 \\\r\n --model_name_or_path gpt2 \\\r\n --model_type gpt2 \\\r\n --num_train_epochs 1 \\\r\n --output_dir /opt/ml/model/ \\\r\n --per_device_eval_batch_size 2 \\\r\n --per_device_train_batch_size 2 \\\r\n --save_steps 50 \\\r\n --save_total_limit 1 \\\r\n --train_file /opt/ml/input/data/data/train.txt \\\r\n --validation_file /opt/ml/input/data/data/val.txt\r\n```\r\n\r\nI then see this error:\r\n```\r\n File \"run_clm.py\", line 374, in <module>\r\n main()\r\n File \"run_clm.py\", line 344, in main\r\n trainer.train(model_path=model_path)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 799, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 1137, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 1163, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 732, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py\", line 895, in forward\r\n return_dict=return_dict,\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 732, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py\", line 681, in forward\r\n inputs_embeds = self.wte(input_ids)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 732, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/sparse.py\", line 126, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py\", line 1814, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select\r\n```",
"Like I said, not the parallel expert, you should try tagging the person who added the functionality :-)",
"@laphang Somewhere **data parallelism** is being triggered in run_clm.py. I highly suspect it's because you set `per_device_train_batch_size` and `per_device_eval_batch_size` to a value greater than 1, so the script is probably confused. You indicated you wanted to use **model parallelism** -- i.e. _split a single model into pieces and distribute those pieces across several devices_ so that, for example, the embedding layers and the first several attention blocks are only on the first GPU. A sample starts on the first GPU and is automatically handed off to another GPU as it goes through the mode. Then you indicated that you want to assign different batches to different GPUs when they all have to start on the first GPU. This might not be a problem though because it actually sounds like you want data parallelism, which _duplicates the model to each device to train a larger batch_. You don't need model parallelism for that. Data parallelism is the default behavior of Trainer. So to summarize:\r\n\r\n- **Model parallelism**: let's you train bigger models (e.g. gpt2-xl). Set the `per_device_eval_batch_size `and `per_device_train_batch_size `to 1.\r\n- **Data parallelism**: let's you train bigger batch sizes by duplicating the model to several GPUs and training on more samples at the same time. Set `model_parallel `to false and the trainer will automatically default to data parallelism when you have more than one GPU.",
"@sgugger Short term, we need to add this to `TrainingArguments`:\r\n```\r\nif self.model_parallel:\r\n assert self.per_device_train_batch_size == 1, \"Model is parallelized, but per_device_train_batch_size is not 1. Model parallelism only supports a batch size of one at this time.\"\r\n assert self.per_device_eval_batch_size == 1, \"Model is parallelized, but per_device_eval_batch_size is not 1. Model parallelism only supports a batch size of one at this time.\"\r\n```\r\n\r\nIn the long-term, we need to figure out how to enable batches for model parallelism. Batches aren't assigned to devices, so the current arguments `per_device`... only makes sense for data parallelism. \r\n\r\n",
"Hi @alexorona, thanks for the quick response. I've just gotten back from the Christmas / New year break, and getting back into things. \r\n\r\nI tried setting the batch_sizes to 1, but I still seem to get basically the same errors. (I also switched to gpt2-large)\r\n\r\nA) when running this:\r\n```\r\npython -m torch.distributed.launch \\\r\n --nproc_per_node 4 run_clm.py \\\r\n --do_train \\\r\n --do_eval \\\r\n --fp16 \\\r\n --logging_first_step \\\r\n --model_parallel \\\r\n --evaluation_strategy epoch \\\r\n --logging_steps 50 \\\r\n --model_name_or_path gpt2-large \\\r\n --model_type gpt2 \\\r\n --num_train_epochs 1 \\\r\n --output_dir /opt/ml/model/ \\\r\n --per_device_eval_batch_size 1 \\\r\n --per_device_train_batch_size 1 \\\r\n --save_steps 50 \\\r\n --save_total_limit 1 \\\r\n --train_file /opt/ml/input/data/data/train.txt \\\r\n --validation_file /opt/ml/input/data/data/val.txt\r\n```\r\n\r\nI get this:\r\n```\r\nTraceback (most recent call last):\r\n File \"run_clm.py\", line 374, in <module>\r\n main()\r\n File \"run_clm.py\", line 344, in main\r\n trainer.train(model_path=model_path)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 681, in train\r\n else True\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py\", line 282, in __init__\r\n#015 86%|âââââââââ | 56/65 [00:04<00:00, 12.03ba/s] ).format(device_ids, output_device, {p.device for p in module.parameters()})\r\nAssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [1], output_device 1, and module parameters {device(type='cpu')}.\r\n#015 86%|âââââââââ | 56/65 [00:04<00:00, 11.81ba/s]#015 89%|âââââââââ | 58/65 [00:04<00:00, 12.31ba/s]#015 89%|âââââââââ | 58/65 [00:04<00:00, 12.26ba/s]#015 89%|âââââââââ | 58/65 [00:04<00:00, 12.08ba/s]#015 92%|ââââââââââ| 60/65 [00:04<00:00, 12.52ba/s]#015 92%|ââââââââââ| 60/65 [00:04<00:00, 12.40ba/s]#015 92%|ââââââââââ| 60/65 [00:04<00:00, 12.21ba/s]#015 95%|ââââââââââ| 62/65 [00:04<00:00, 12.58ba/s]#015 95%|ââââââââââ| 62/65 [00:04<00:00, 12.47ba/s]#015 95%|ââââââââââ| 62/65 [00:05<00:00, 12.22ba/s]#015 98%|ââââââââââ| 64/65 [00:05<00:00, 12.20ba/s]#015100%|ââââââââââ| 65/65 [00:05<00:00, 12.64ba/s]\r\nTraceback (most recent call last):\r\n File \"run_clm.py\", line 374, in <module>\r\n main()\r\n File \"run_clm.py\", line 344, in main\r\n trainer.train(model_path=model_path)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 681, in train\r\n else True\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py\", line 282, in __init__\r\n ).format(device_ids, output_device, {p.device for p in module.parameters()})\r\nAssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [2], output_device 2, and module parameters {device(type='cpu')}.\r\n#015 98%|ââââââââââ| 64/65 [00:05<00:00, 12.09ba/s]#015100%|ââââââââââ| 65/65 [00:05<00:00, 12.56ba/s]\r\nTraceback (most recent call last):\r\n File \"run_clm.py\", line 374, in <module>\r\n main()\r\n File \"run_clm.py\", line 344, in main\r\n trainer.train(model_path=model_path)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 681, in train\r\n else True\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py\", line 282, in __init__\r\n ).format(device_ids, output_device, {p.device for p in 
module.parameters()})\r\nAssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [3], output_device 3, and module parameters {device(type='cpu')}.\r\n#015 98%|ââââââââââ| 64/65 [00:05<00:00, 11.61ba/s]#015100%|ââââââââââ| 65/65 [00:05<00:00, 12.42ba/s]\r\n[INFO|trainer.py:388] 2021-01-04 23:04:51,956 >> The following columns in the training set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .\r\n[INFO|trainer.py:388] 2021-01-04 23:04:51,957 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .\r\nTraceback (most recent call last):\r\n File \"run_clm.py\", line 374, in <module>\r\n main()\r\n File \"run_clm.py\", line 344, in main\r\n trainer.train(model_path=model_path)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 681, in train\r\n else True\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py\", line 282, in __init__\r\n ).format(device_ids, output_device, {p.device for p in module.parameters()})\r\nAssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [0], output_device 0, and module parameters {device(type='cpu')}.\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/opt/conda/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py\", line 261, in <module>\r\n main()\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py\", line 257, in main\r\n cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'run_clm.py', '--local_rank=3', '--do_train', '--do_eval', '--fp16', '--logging_first_step', '--model_parallel', '--evaluation_strategy', 'epoch', '--logging_steps', '50', '--model_name_or_path', 'gpt2-large', '--model_type', 'gpt2', '--num_train_epochs', '1', '--output_dir', '/opt/ml/model/', '--per_device_eval_batch_size', '1', '--per_device_train_batch_size', '1', '--save_steps', '50', '--save_total_limit', '1', '--train_file', '/opt/ml/input/data/data/train.txt', '--validation_file', '/opt/ml/input/data/data/val.txt']' returned non-zero exit status 1.\r\n```\r\n\r\nB) when running this:\r\n```\r\npython run_clm.py \\\r\n --do_train \\\r\n --do_eval \\\r\n --fp16 \\\r\n --logging_first_step \\\r\n --model_parallel \\\r\n --evaluation_strategy epoch \\\r\n --logging_steps 50 \\\r\n --model_name_or_path gpt2-large \\\r\n --model_type gpt2 \\\r\n --num_train_epochs 1 \\\r\n --output_dir /opt/ml/model/ \\\r\n --per_device_eval_batch_size 1 \\\r\n --per_device_train_batch_size 1 \\\r\n --save_steps 50 \\\r\n --save_total_limit 1 \\\r\n --train_file /opt/ml/input/data/data/train.txt \\\r\n --validation_file /opt/ml/input/data/data/val.txt\r\n```\r\nI get this:\r\n```\r\nTraceback (most recent call last):\r\n File \"run_clm.py\", line 374, in <module>\r\n main()\r\n File \"run_clm.py\", line 344, in main\r\n trainer.train(model_path=model_path)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 799, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 1137, in 
training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/trainer.py\", line 1163, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 732, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py\", line 895, in forward\r\n return_dict=return_dict,\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 732, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py\", line 681, in forward\r\n inputs_embeds = self.wte(input_ids)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 732, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/sparse.py\", line 126, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py\", line 1814, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select\r\n#015 0%| | 0/813 [00:00<?, ?it/s]\r\n```\r\n\r\nI had a few questions / comments:\r\n1) For model_parallel, am I supposed to use torch.distributed.launch or not? I wasn't 100% clear on that. \r\n2) Seems like I'm still getting similar errors with batch_sizes 1 as I was getting previously, any other thoughts on what the issue is?\r\n3) For some use cases, I would be interested in using model parallelism and data parallelism together (e.g. for models that currently just fit on the gpu memory with batch size 1 or 2 - I am presuming that splitting the model with model parallelism would allow space in the gpu memory for larger batch sizes and increase speed?). So would definitely be interested in any future changes that allow for that. (per your last comment)",
"FYI I noticed that this was included in v4.2.0, removing model_parallel arg from trainer. It hadn't been made to work yet.\r\nhttps://github.com/huggingface/transformers/pull/9451\r\n\r\nI'll wait for it to be included in the trainer. \r\n",
"It is included, there is just no need for the flag that wasn't doing anything special (parallelizing the model was the user's responsibility and still is). We just automatically detect if the model is parallelized now, without needing the flag.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: AWS Sagemaker
- Python version: 3.6
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: YES
### Who can help
@LysandreJik @sgugger @alexorona
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
I am fine-tuning GPT2 using my own dataset, using the examples/language-modelling/run_clm.py script, and I want to use the new model_parallel feature in v4.1.1. I am using a multi-gpu instance (AWS p3.8xlarge - with 4 gpus).
But I get this error:
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [2], output_device 2, and module parameters {device(type='cpu')}.
## To reproduce
Steps to reproduce the behavior:
1. Run run_clm.py with the following params:
```
python -m torch.distributed.launch \
--nproc_per_node 4 run_clm.py \
--do_train \
--do_eval \
--fp16 \
--logging_first_step \
--model_parallel \
--evaluation_strategy epoch \
--logging_steps 50 \
--model_name_or_path gpt2 \
--model_type gpt2 \
--num_train_epochs 1 \
--output_dir /opt/ml/model/ \
--per_device_eval_batch_size 2 \
--per_device_train_batch_size 2 \
--save_steps 50 \
--save_total_limit 1 \
--train_file /opt/ml/input/data/data/train.txt \
--validation_file /opt/ml/input/data/data/val.txt
```
2. I get this error when training starts:
```
Traceback (most recent call last):
File "run_clm.py", line 374, in <module>
main()
File "run_clm.py", line 344, in main
trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
else True
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
    ).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [1], output_device 1, and module parameters {device(type='cpu')}.
Traceback (most recent call last):
File "run_clm.py", line 374, in <module>
main()
File "run_clm.py", line 344, in main
trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
else True
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [2], output_device 2, and module parameters {device(type='cpu')}.
Traceback (most recent call last):
File "run_clm.py", line 374, in <module>
main()
File "run_clm.py", line 344, in main
trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
else True
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [3], output_device 3, and module parameters {device(type='cpu')}.
[INFO|trainer.py:388] 2020-12-22 00:54:34,892 >> The following columns in the training set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
[INFO|trainer.py:388] 2020-12-22 00:54:34,892 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
Traceback (most recent call last):
File "run_clm.py", line 374, in <module>
main()
File "run_clm.py", line 344, in main
trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
else True
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [0], output_device 0, and module parameters {device(type='cpu')}.
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 261, in <module>
main()
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 257, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'run_clm.py', '--local_rank=3', '--do_train', '--do_eval', '--fp16', '--logging_first_step', '--model_parallel', '--evaluation_strategy', 'epoch', '--logging_steps', '50', '--model_name_or_path', 'gpt2', '--model_type', 'gpt2', '--num_train_epochs', '1', '--output_dir', '/opt/ml/model/', '--per_device_eval_batch_size', '2', '--per_device_train_batch_size', '2', '--save_steps', '50', '--save_total_limit', '1', '--train_file', '/opt/ml/input/data/data/train.txt', '--validation_file', '/opt/ml/input/data/data/val.txt']' returned non-zero exit status 1.
```
## Expected behavior
I expect the fine-tuning script to run successfully.
If I remove `--model_parallel` from the args, then it does run successfully in distributed mode. But I want to use this new feature to reduce memory usage and increase the batch size.
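For reference, per the maintainers' comments on this issue, the model has to be split explicitly and the script run in a single process. A minimal sketch (not the run_clm.py flow; dataset setup is omitted, and it assumes a transformers version where the Trainer detects a parallelized model automatically):
```
# Sketch: split GPT-2 across the visible GPUs with parallelize(), then train with
# a plain Trainer in one process (no torch.distributed.launch).
from transformers import GPT2LMHeadModel, Trainer, TrainingArguments

model = GPT2LMHeadModel.from_pretrained("gpt2-large")
model.parallelize()  # default device map spreads the blocks over all visible GPUs

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    num_train_epochs=1,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # train_dataset defined elsewhere
trainer.train()
```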
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9243/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9242 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9242/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9242/comments | https://api.github.com/repos/huggingface/transformers/issues/9242/events | https://github.com/huggingface/transformers/issues/9242 | 772,556,662 | MDU6SXNzdWU3NzI1NTY2NjI= | 9,242 | Load from a TF 1.0 checkpoint in modeling_tf_utils.py | {
"login": "vsuarezpaniagua",
"id": 4960468,
"node_id": "MDQ6VXNlcjQ5NjA0Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4960468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vsuarezpaniagua",
"html_url": "https://github.com/vsuarezpaniagua",
"followers_url": "https://api.github.com/users/vsuarezpaniagua/followers",
"following_url": "https://api.github.com/users/vsuarezpaniagua/following{/other_user}",
"gists_url": "https://api.github.com/users/vsuarezpaniagua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vsuarezpaniagua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vsuarezpaniagua/subscriptions",
"organizations_url": "https://api.github.com/users/vsuarezpaniagua/orgs",
"repos_url": "https://api.github.com/users/vsuarezpaniagua/repos",
"events_url": "https://api.github.com/users/vsuarezpaniagua/events{/privacy}",
"received_events_url": "https://api.github.com/users/vsuarezpaniagua/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @vsuarezpaniagua,\r\n\r\nGreat point! I noticed the same thing actually a couple of days ago as well with @jplu. I think we should add this functionality to `modeling_tf_utils.py`. It should be very similar to how it's done in the corresponding code in `modeling_utils.py`, and would require a new `load_tf1_weights` for TF2 models. \r\n\r\nPinging @jplu, @LysandreJik, @sgugger here as well for some brainstorming on the importance of this feature request and how to best design it if neeed.",
"Thank you for taking it into consideration. Also, I saw that the _**EvaluationStrategy**_ for _epoch_ is not working using it in _training_args_tf.py_ for building a [TFTrainer](https://github.com/huggingface/transformers/blob/ec07da65e25562040581febaf9b400a462962961/src/transformers/trainer_tf.py#L49) in _trainer_tf.py_. And I think this is because there are not _self.control.should_evaluate_ or _self.control.should_save_ as there are in the Torch implementations _trainer.py_ and _training_args.py_. Having similar code for both implementations could solve all these problems and easier to follow.\r\n\r\n> Hey @vsuarezpaniagua,\r\n> \r\n> Great point! I noticed the same thing actually a couple of days ago as well with @jplu. I think we should add this functionality to `modeling_tf_utils.py`. It should be very similar to how it's done in the corresponding code in `modeling_utils.py`, and would require a new `load_tf1_weights` for TF2 models.\r\n> \r\n> Pinging @jplu, @LysandreJik, @sgugger here as well for some brainstorming on the importance of this feature request and how to best design it if neeed.\r\n\r\n",
"The TF Trainer is off of maintenance since a while in order to be rethought when we can dedicate a bit of time to it. Not the current TF priority unfortunately. But at some point it is our plan to make the TF Trainer catching up his late on the PT one.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | In the file modeling_utils.py, we can load a TF 1.0 checkpoint, as indicated in this [line](https://github.com/huggingface/transformers/blob/fb650df8590f796663226132482d09da5b0fb613/src/transformers/modeling_utils.py#L930). However, in modeling_tf_utils.py, which is the TF counterpart of the same version, we cannot load models from TF 1.0, even though the docstring specifically says that you can, as in:
` >>> model = BertModel.from_pretrained('./tf_model/my_tf_checkpoint.ckpt.index', from_tf=True, config=config)`
But there is no _if_ for
`os.path.isfile(os.path.join(pretrained_model_name_or_path, TF_WEIGHTS_NAME + ".index"))`
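As a stop-gap while the TF side lacks this branch, the PyTorch classes (which do have the `from_tf` branch) can be used as a bridge — a hedged sketch, with illustrative paths:
```
# Sketch: load the TF 1.0 checkpoint through the PyTorch class, save it, then
# reload it as a TF 2.x model from the PyTorch weights.
from transformers import BertConfig, BertModel, TFBertModel

config = BertConfig.from_json_file("./tf_model/bert_config.json")  # illustrative path
pt_model = BertModel.from_pretrained("./tf_model/my_tf_checkpoint.ckpt.index", from_tf=True, config=config)
pt_model.save_pretrained("./converted")

tf_model = TFBertModel.from_pretrained("./converted", from_pt=True)
```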
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9242/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9241 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9241/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9241/comments | https://api.github.com/repos/huggingface/transformers/issues/9241/events | https://github.com/huggingface/transformers/pull/9241 | 772,506,679 | MDExOlB1bGxSZXF1ZXN0NTQzNzYwNjQx | 9,241 | Seq2seq trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh and this PR does not delete the old `Seq2SeqTrainer` just yet since I'm planning to merge it just before my vacation. That way if something goes horribly wrong, people can still use the old `Seq2SeqTrainer`. The plan would be to delete it in January after the new `Seq2SeqTrainer` has been tested.",
"> change naming from `max_target_length` -> `max_length`. Think it's clearer this way that the args of `predict` and `eval` correspond 1-to-1 to the `max_length` of `generate()`\r\n\r\n@patrickvonplaten - did you mean to suggest to change the new arguments to evaluate/predict that this PR adds or to rename `--max_target_length`, `--val_max_target_length` cl args?",
"> suggest\r\n\r\nI was more referring to the args of the functions, but more generally I think it would actually be better if there would be only one `max_length` in the data_args - so leave the `max_length` that we have now and completely remove the `source_max_length` argument. IMO, `source_max_length` should be fully defined by the tokenizer of the model. I don't really see a need to let the user define the maximum input length, but this is probably better to be done in a separate PR. On the other hand, we also do have a `max_seq_length` argument in `run_mlm.py` so not 100% sure what's best here...What is your opinion here @stas00 @sgugger @patil-suraj ?",
"I think it's better to keep `max_source_length`, `max_target_length` since in some cases the input length could be way shorter than the tokenizer or model's max length and these two could be used to truncate the text .\r\n\r\nWe can get rid of `val_max_target_length` and `test_max_target_length` in `DataTrainingArguments`, since in almost all scripts we are using the same length for all three arguments (`max_target_length`, `val_max_target_length`, `test_max_target_length`). Then we could pass the same `max_target_length` to both `evaluate` and `predict` methods.\r\n\r\nsorry about the miscommunication.",
"I don't know whether this is a good time, but should we add `min_length` as well here while this part of the API is being redesigned? Surely if generate has `min_length` someone might need to redefine it too? But I'm totally fine to deferring this until and if someone asks for it.",
"The best way to decide about the naming is to show a few use cases - and then it's clear whether these are needed or not.\r\n\r\nPlease don't forget that Sam or whoever added those in first place had a good reason for it, so it'd be hard to make a decision in the void.\r\n\r\nPerhaps such decision shouldn't be rushed - but made into an RFC, invite input from users who have a lot more use cases?"
] | 1,608 | 1,651 | 1,608 | COLLABORATOR | null | # What does this PR do?
This PR graduates `Seq2SeqTrainer` and moves it inside the transformers library. By doing so, it moves some of the features of the `Seq2SeqTrainer` inside the main `Trainer` and leaves some in the subclass. More precisely, the following features will be available in the general Trainer:
- label smoothing is passed to the main `Trainer` (so it can be used in all classification problems), with an easier API and bug fixes (the current implementation did not work with -100 as an ignore index for the loss, now it does; also the current implementation returns a loss that is not averaged and is thus too big, see below)
- the ability to pick any scheduler
- the ability to use Adafactor instead of AdamW
The sortish-sampling and predict with generate are left in the subclass.
There are also a few breaking changes in the `Seq2SeqTrainer` API to make its init match the one of `Trainer` mainly the init does not take a `config` and a `DataArguments`, instead:
- the token IDs are taken from the tokenizer
- the arguments for generation are passed to `evaluate` or `predict`
- the `ignore_pad_token_for_loss` is passed in the init but is deprecated since it should be removed once BART and subclasses use -100.
About label smoothing and the mean vs. sum: the current implementation takes the sum over the batch size and sequence length (only counting tokens that have a label != -100). This gives something that does not have the same scale as the usual cross-entropy loss (which is the mean over the same dimensions) and thus would require a special learning rate to be useful. With the fix, the label-smoothed loss has the same scale as the non-smoothed loss, which means the same command with the same learning rate should produce comparable results.
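For illustration, here is a minimal sketch of the averaging behaviour described above (plain PyTorch; this is not the exact `Trainer` implementation, and the `epsilon` value is just an example):
```python
import torch.nn.functional as F

def label_smoothed_loss(logits, labels, epsilon=0.1, ignore_index=-100):
    # logits: (batch, seq_len, vocab); labels: (batch, seq_len) with -100 at ignored positions
    log_probs = F.log_softmax(logits, dim=-1)
    padding_mask = labels.eq(ignore_index)
    safe_labels = labels.clamp(min=0).unsqueeze(-1)   # avoid gathering on -100
    nll = -log_probs.gather(dim=-1, index=safe_labels).squeeze(-1)
    smooth = -log_probs.mean(dim=-1)                  # uniform part of the smoothed target
    nll = nll.masked_fill(padding_mask, 0.0)
    smooth = smooth.masked_fill(padding_mask, 0.0)
    num_active = (~padding_mask).sum()
    # Averaging (instead of summing) over active tokens keeps the same scale as plain cross entropy.
    return ((1.0 - epsilon) * nll.sum() + epsilon * smooth.sum()) / num_active
```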
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9241/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9241",
"html_url": "https://github.com/huggingface/transformers/pull/9241",
"diff_url": "https://github.com/huggingface/transformers/pull/9241.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9241.patch",
"merged_at": 1608654824000
} |
https://api.github.com/repos/huggingface/transformers/issues/9240 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9240/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9240/comments | https://api.github.com/repos/huggingface/transformers/issues/9240/events | https://github.com/huggingface/transformers/issues/9240 | 772,400,910 | MDU6SXNzdWU3NzI0MDA5MTA= | 9,240 | Help: How to deploy a fine tuned t5 model in production | {
"login": "as-stevens",
"id": 61624036,
"node_id": "MDQ6VXNlcjYxNjI0MDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/61624036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/as-stevens",
"html_url": "https://github.com/as-stevens",
"followers_url": "https://api.github.com/users/as-stevens/followers",
"following_url": "https://api.github.com/users/as-stevens/following{/other_user}",
"gists_url": "https://api.github.com/users/as-stevens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/as-stevens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/as-stevens/subscriptions",
"organizations_url": "https://api.github.com/users/as-stevens/orgs",
"repos_url": "https://api.github.com/users/as-stevens/repos",
"events_url": "https://api.github.com/users/as-stevens/events{/privacy}",
"received_events_url": "https://api.github.com/users/as-stevens/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @as-stevens,\r\n\r\nCould you maybe post this question on the forum: https://discuss.huggingface.co/? We try to move more user-specific questions to the forum and limit Github mostly to bug reports. Thank you!",
"@patrickvonplaten thank you much! I will close this issue."
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | Hi All,
I am trying to deploy a fine-tuned T5 model in production. Deploying a PyTorch model in production is new to me. I went through the Hugging Face presentation on YouTube about how they deploy models, as well as some of the other blog posts.
HF mentions that they deploy the model in a Cython environment, as it gives roughly a 100x boost to inference. So, is it always advisable to run a model in production on Cython?
Does converting a model from PyTorch to TensorFlow help, and is it advisable?
What is the preferred container approach to adopt to run multiple models on a set of GPUs?
I know some of these questions may be basic, and I apologize for that, but I want to make sure I follow the correct guidelines to deploy a model in production.
Thank you
Amit | {
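Not an official recipe, but as a starting point, here is a minimal sketch of a serving-side helper that loads a fine-tuned T5 checkpoint once and exposes a single inference function (the checkpoint path and generation settings below are placeholders):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_DIR = "path/to/finetuned-t5"  # hypothetical path to the fine-tuned checkpoint
tokenizer = T5Tokenizer.from_pretrained(MODEL_DIR)
model = T5ForConditionalGeneration.from_pretrained(MODEL_DIR).eval()

def generate_text(text: str) -> str:
    # Tokenize, generate and decode in one call; keep the model loaded between requests.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```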
"url": "https://api.github.com/repos/huggingface/transformers/issues/9240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9240/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9239 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9239/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9239/comments | https://api.github.com/repos/huggingface/transformers/issues/9239/events | https://github.com/huggingface/transformers/pull/9239 | 772,388,198 | MDExOlB1bGxSZXF1ZXN0NTQzNjYzNTE0 | 9,239 | Adding performer fine-tuning research example | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger - we want to do some fine-tuning experiments with the new performer model: https://arxiv.org/abs/2009.14794 before adding it to the `src/transformers/`. I think this is a good first place for it where we don't have to be super careful about the API choices yet. Is that fine for you?",
"Yes, this completely works for me!"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a performer fine-tuning research example based on `run_mlm_flax.py`. The user can fine-tune a Performer/FAVOR+ Bert starting from the Bert checkpoint or blank model of their choice, and compare it to a vanilla Bert model, also from a checkpoint or blank.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9239/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9239",
"html_url": "https://github.com/huggingface/transformers/pull/9239",
"diff_url": "https://github.com/huggingface/transformers/pull/9239.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9239.patch",
"merged_at": 1608581981000
} |
https://api.github.com/repos/huggingface/transformers/issues/9238 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9238/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9238/comments | https://api.github.com/repos/huggingface/transformers/issues/9238/events | https://github.com/huggingface/transformers/issues/9238 | 772,354,054 | MDU6SXNzdWU3NzIzNTQwNTQ= | 9,238 | Bug SqueezeBERT stops with no error | {
"login": "HRezaeiM",
"id": 25917418,
"node_id": "MDQ6VXNlcjI1OTE3NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/25917418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HRezaeiM",
"html_url": "https://github.com/HRezaeiM",
"followers_url": "https://api.github.com/users/HRezaeiM/followers",
"following_url": "https://api.github.com/users/HRezaeiM/following{/other_user}",
"gists_url": "https://api.github.com/users/HRezaeiM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HRezaeiM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HRezaeiM/subscriptions",
"organizations_url": "https://api.github.com/users/HRezaeiM/orgs",
"repos_url": "https://api.github.com/users/HRezaeiM/repos",
"events_url": "https://api.github.com/users/HRezaeiM/events{/privacy}",
"received_events_url": "https://api.github.com/users/HRezaeiM/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Is there a way for you to reproduce this error in a colab notebook? ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu
- Python version: anaconda python 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in the script?:
The GPUs available were :
```
Geforce GTX 980 4gb
Geforce GTX Titan 12gb
```
```
transformers == 4.1.1
torch==1.7.0
torchvision == 0.8.1
```
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): SqueezeBERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
using [Sequence Classification with IMDb Reviews
](https://huggingface.co/transformers/custom_datasets.html#seq-imdb) example I have made my own script
* [x] my own modified scripts: (give details below)
The only changes I have made in this script are
1. to use the Yelp dataset,
2. to use SqueezeBERT instead of DistilBERT,
3. to also do 5-label sentiment classification.
```python
# imports required by this snippet
import torch.nn as nn
from transformers import SqueezeBertForSequenceClassification, Trainer, TrainingArguments

training_args = TrainingArguments(
output_dir='./SqueezeBERT_10ep_result', # output directory
per_device_train_batch_size=3, # batch size per device during training
per_device_eval_batch_size=3, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./SqueezeBERT_10ep_log', # directory for storing logs
logging_steps=500,
num_train_epochs=10, # total number of training epochs
evaluation_strategy="epoch",
do_train=True,
do_eval=True,
)
model = SqueezeBertForSequenceClassification.from_pretrained('squeezebert/squeezebert-mnli-headless', return_dict=True)
model.num_labels = 5
model.classifier = nn.Linear(768,5)
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='macro')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
print("Displaying model architecture... !\n")
print(model)
print("Training model starting...!\n")
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset, # evaluation dataset
compute_metrics=compute_metrics,
)
trainer.train()
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Using the yelp full dataset
## To reproduce
Steps to reproduce the behavior:
1. Run the scripts as mentioned
2. Around epoch 3, training suddenly stops using the GPU and, although no error shows up, nothing changes anymore...
3. The last checkpoint that gets saved is `checkpoint-310000`.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It should have just gone on and finished the training process.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9238/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9237 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9237/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9237/comments | https://api.github.com/repos/huggingface/transformers/issues/9237/events | https://github.com/huggingface/transformers/pull/9237 | 772,296,975 | MDExOlB1bGxSZXF1ZXN0NTQzNTg1OTc2 | 9,237 | Update the README of the text classification example | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
This PR updates the main README of the examples folder and the one in the text classification example to take into account the recent changes in the scripts. In particular, I re-ran the command shown for all tasks with/without FP16 to make a clean table of results.
I moved all stuff about distributed training/TPUs in the general README of the examples as all example scripts now use Trainer, so have this working out of the box.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9237/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9237",
"html_url": "https://github.com/huggingface/transformers/pull/9237",
"diff_url": "https://github.com/huggingface/transformers/pull/9237.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9237.patch",
"merged_at": 1608582221000
} |
https://api.github.com/repos/huggingface/transformers/issues/9236 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9236/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9236/comments | https://api.github.com/repos/huggingface/transformers/issues/9236/events | https://github.com/huggingface/transformers/issues/9236 | 772,292,761 | MDU6SXNzdWU3NzIyOTI3NjE= | 9,236 | mBART finetuned on XSUM | {
"login": "mbelcen",
"id": 26058605,
"node_id": "MDQ6VXNlcjI2MDU4NjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/26058605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbelcen",
"html_url": "https://github.com/mbelcen",
"followers_url": "https://api.github.com/users/mbelcen/followers",
"following_url": "https://api.github.com/users/mbelcen/following{/other_user}",
"gists_url": "https://api.github.com/users/mbelcen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbelcen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbelcen/subscriptions",
"organizations_url": "https://api.github.com/users/mbelcen/orgs",
"repos_url": "https://api.github.com/users/mbelcen/repos",
"events_url": "https://api.github.com/users/mbelcen/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbelcen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> For French texts, results are in English and poor summary --> (why the language changes?)\r\n\r\nSince you are fine-tuning on English data, I don't think it will be good at generating french summaries. Probably a good idea to fine-tune with multiple languages.\r\n\r\n> Why facebook/bart-large-xsum understands french (even if bart is trained on English)?\r\n\r\nsince the training data is scraped from the web there is a chance that there could be some french text in it. I think the authors would answer this question better.",
"\r\n@patil-suraj\r\n\r\n\r\n> > For French texts, results are in English and poor summary --> (why the language changes?)\r\n> \r\n> Since you are fine-tuning on English data, I don't think it will be food at generating french summaries. Probably a good idea to fine-tune with multiple languages.\r\n\r\nSo I imagined mBART would be even better at this since even english BART finetuned on Xsum only can summarize pretty well in French. \r\n\r\n However, I don't understand why my model (mbart FT on Xsum) gives out of context results (in any language) like I mentioned above? I followed the steps at https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md and made very few changes.\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.1.0.dev0
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201216+cu110 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: distributed
### Who can help
mBART: @patrickvonplaten
examples/seq2seq: @patil-suraj
## Information
Model I am using: mBART
The problem arises when using:
* [ X ] the official example scripts:
I used the official seq2seq training example here:
https://github.com/huggingface/transformers/tree/master/examples/seq2seq
* [ X ] my own modified scripts: (give details below)
my training script is as follows (no changes to finetune_trainer.py):
```shell
python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py \
--data_dir "./xsum" \
--output_dir "./my_models" \
--overwrite_output_dir \
--model_name_or_path "facebook/mbart-large-cc25" \
--fp16 \
--freeze_encoder \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--learning_rate=3e-5 \
--do_train --do_eval --do_predict \
--evaluation_strategy steps \
--predict_with_generate \
--n_val 1000 \
--max_target_length=60 \
--val_max_target_length=60 \
--test_max_target_length=100 \
"$@"
```
The tasks I am working on is:
* [ X ] XSUM
## To reproduce
Steps to reproduce the behavior:
1. follow https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md
2. launch training script with my modifications (batch_size, freeze_encoder, max_target_length ..)
3. inference on two texts (french and english) using the following code:
```python
# imports required by this snippet
from transformers import MBartForConditionalGeneration, MBartTokenizer

def sum_mbart_xsum(text):
print("---------------------------------------------------------------------------------")
print(" MBART large xsum ")
print("---------------------------------------------------------------------------------")
tokenizer = MBartTokenizer.from_pretrained("/home/mohamed/Desktop/Summarization/mbart-xsum")
model = MBartForConditionalGeneration.from_pretrained("/home/mohamed/Desktop/Summarization/mbart-xsum")
article_input_ids = tokenizer.batch_encode_plus([text], return_tensors='pt', max_length=1024, truncation=True)[
'input_ids']
summary_ids = model.generate(article_input_ids,
num_beams=6,
length_penalty=1.0,
max_length=142,
no_repeat_ngram_size=3)
summary_txt = tokenizer.decode(summary_ids.squeeze(), skip_special_tokens=True)
return summary_txt
```
## Results
1- eval/test results:
```jsonc
{
"epoch": 3.0,
"test_gen_len": 28.1,
"test_loss": 1.7692,
"test_n_ojbs": -1,
"test_rouge1": 32.7618,
"test_rouge2": 12.022,
"test_rougeL": 25.6512,
"test_rougeLsum": 25.6499,
"test_runtime": 2778.8939,
"test_samples_per_second": -0.0,
"train_n_ojbs": -1,
"train_runtime": 94633.1507,
"train_samples_per_second": -0.0,
"val_gen_len": 28.0,
"val_loss": 1.7993,
"val_n_ojbs": 1000,
"val_rouge1": 32.9862,
"val_rouge2": 11.528,
"val_rougeL": 25.6517,
"val_rougeLsum": 25.7055,
"val_runtime": 267.0092,
"val_samples_per_second": 3.745
}
```
2- Inference:
* Out-of-context summarizations (the output is something related to the training data) --> (is something wrong with my fine-tuning configuration or inference function?)
* For French texts, the results are in English and the summaries are poor --> (why does the language change?)
## Expected behavior
My main objective in fine-tuning mBART on XSUM is to evaluate how multilingual mBART really is, basically answering the following question: should I fine-tune on a dataset with multiple languages to be able to summarize in multiple languages, or is the multilingual characteristic preserved with mBART so that fine-tuning on English (XSUM) only is enough?
Current problems:
1- (inference results and questions above)
2- Why does `facebook/bart-large-xsum` understand French (even though BART is trained on English)?
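A possibly relevant detail for the language switch (an assumption, not a confirmed fix): with mBART the target language is picked at generation time through the language code token. Reusing `model`, `tokenizer` and `article_input_ids` from the inference function above, forcing French output would look like:
```python
summary_ids = model.generate(
    article_input_ids,
    decoder_start_token_id=tokenizer.lang_code_to_id["fr_XX"],  # force French as target language
    num_beams=6,
    length_penalty=1.0,
    max_length=142,
    no_repeat_ngram_size=3,
)
```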
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9236/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9235 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9235/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9235/comments | https://api.github.com/repos/huggingface/transformers/issues/9235/events | https://github.com/huggingface/transformers/issues/9235 | 772,269,693 | MDU6SXNzdWU3NzIyNjk2OTM= | 9,235 | run_mlm.py crashes when saving model checkpoint | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We don't have your tokenizer, so the reproducer you give us does not work. I tried on my side to run the same command with a saved tokenizer and a saved config file and it works without any trouble.\r\n\r\n> Moreover, it's completely unnecessary to save the tokenizer in trainer.py, as the tokenizer is already trained and doesn't need to be saved again.\r\n\r\nIt is necessary to allow users to resume training from the latest checkpoint.",
"Yeah, it's necessary to allow users to resume training, but that concerns the model only, not the tokenizer. The tokenizer is trained prior to training the model and doesn't change during training. I'll upload the tokenizer so that you can reproduce the issue. ",
"I'm trying to post my tokenizer but Github doesn't let me, it has too many characters....\r\nCan I send it to you via email ? @sgugger ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.0.1
- Platform: Google Cloud
- Python version: 3.6.10
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?):
- Using GPU in script?: NO. Using TPU
- Using distributed or parallel set-up in script?: YES
### Who can help
@LysandreJik @mfuntowicz @sgugger
## Information
I'm trying to train an ALBERT model from scratch, with a custom tokenizer, on Google Cloud TPUs. The problem arises when saving the model checkpoints, more specifically when trying to save the tokenizer. I'm using your example script run_mlm.py.
The problem arises when using:
* [ ] the official example scripts: run_mlm.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Masked Language Modelling.
## To reproduce
Steps to reproduce the behavior:
1. Run run_mlm.py with the following params: python transformers/examples/xla_spawn.py --num_cores 8 \
transformers/examples/language-modeling/run_mlm.py \
--model_type albert \
--train_file texts_train.txt \
--validation_file good_texts_valid.txt \
--output_dir modelo_prueba \
--tokenizer_name ./tokenizadores/definitivo \
--overwrite_output_dir \
--line_by_line \
--pad_to_max_len \
--do_train \
--do_eval \
--evaluation_strategy steps \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--learning_rate 1e-3 \
--max_steps 500 \
--save_steps 100 \
--save_total_limit 15 \
--overwrite_cache \
--max_seq_length 512 \
--eval_accumulation_steps 10 \
--logging_steps 100 \
--config_name ./config/albert-base-v2.json \
At step 100, the following error arises:
```
INFO|trainer.py:1141] 2020-12-21 15:46:34,157 >> Saving model checkpoint to modelo_prueba/checkpoint-100
[INFO|configuration_utils.py:281] 2020-12-21 15:46:34,158 >> Configuration saved in modelo_prueba/checkpoint-100/config.json
[INFO|modeling_utils.py:741] 2020-12-21 15:46:34,556 >> Model weights saved in modelo_prueba/checkpoint-100/pytorch_model.bin
Exception in device=TPU:0: expected str, bytes or os.PathLike object, not NoneType
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm.py", line 405, in _mp_fn
main()
File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm.py", line 379, in main
trainer.train(model_path=model_path)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 777, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 848, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 869, in _save_checkpoint
self.save_model(output_dir)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 1135, in save_model
self._save_tpu(output_dir)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 1157, in _save_tpu
self.tokenizer.save_pretrained(output_dir)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1972, in save_pretrained
filename_prefix=filename_prefix,
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 524, in _save_pretrained
vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/models/albert/tokenization_albert_fast.py", line 252, in save_vocabulary
if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/posixpath.py", line 378, in abspath
path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
## Expected behavior
The expected behavior is that the script doesn't crash. Moreover, it's completely unnecessary to save the tokenizer in trainer.py, as the tokenizer is already trained and doesn't need to be saved again.
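A minimal way to reproduce the failing step outside of training (a sketch; it assumes the custom tokenizer directory ships only a `tokenizer.json` and no sentencepiece model file, which appears to be what leaves `vocab_file` as `None` in the traceback above):
```python
from transformers import AlbertTokenizerFast

# "./tokenizadores/definitivo" is the custom tokenizer directory from the command above
tok = AlbertTokenizerFast.from_pretrained("./tokenizadores/definitivo")
tok.save_pretrained("/tmp/tokenizer_check")  # raises the same TypeError when vocab_file is None
```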
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9235/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9234 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9234/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9234/comments | https://api.github.com/repos/huggingface/transformers/issues/9234/events | https://github.com/huggingface/transformers/pull/9234 | 772,139,778 | MDExOlB1bGxSZXF1ZXN0NTQzNDYwNjM1 | 9,234 | Fix TF template | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Fix the TF template for the new einsum dense layer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9234/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9234",
"html_url": "https://github.com/huggingface/transformers/pull/9234",
"diff_url": "https://github.com/huggingface/transformers/pull/9234.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9234.patch",
"merged_at": 1608555137000
} |
https://api.github.com/repos/huggingface/transformers/issues/9233 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9233/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9233/comments | https://api.github.com/repos/huggingface/transformers/issues/9233/events | https://github.com/huggingface/transformers/pull/9233 | 772,085,090 | MDExOlB1bGxSZXF1ZXN0NTQzNDE5NDE4 | 9,233 | [MPNet] Add slow to fast tokenizer converter | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9194
This PR adds a converter from slow to fast MPNet tokenizers. This way fast tokenizers can be correctly serialized and loaded again. To prevent future issues like #9194, we should maybe think about not allowing a "FastTokenizer" to be added without a corresponding converter... what do you think @sgugger, @LysandreJik, @thomwolf?
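A quick round-trip check of what the converter enables (sketch; `microsoft/mpnet-base` is assumed to be the released checkpoint):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/mpnet-base", use_fast=True)
tok.save_pretrained("/tmp/mpnet-fast")                  # serializes the fast tokenizer
reloaded = AutoTokenizer.from_pretrained("/tmp/mpnet-fast")
assert reloaded("Hello world")["input_ids"] == tok("Hello world")["input_ids"]
```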
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9233/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9233",
"html_url": "https://github.com/huggingface/transformers/pull/9233",
"diff_url": "https://github.com/huggingface/transformers/pull/9233.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9233.patch",
"merged_at": 1608561695000
} |
https://api.github.com/repos/huggingface/transformers/issues/9232 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9232/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9232/comments | https://api.github.com/repos/huggingface/transformers/issues/9232/events | https://github.com/huggingface/transformers/issues/9232 | 772,081,249 | MDU6SXNzdWU3NzIwODEyNDk= | 9,232 | command line_by_line missing in https://github.com/huggingface/transformers/tree/master/examples/language-modeling | {
"login": "TalitaAnthonio",
"id": 25078987,
"node_id": "MDQ6VXNlcjI1MDc4OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/25078987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TalitaAnthonio",
"html_url": "https://github.com/TalitaAnthonio",
"followers_url": "https://api.github.com/users/TalitaAnthonio/followers",
"following_url": "https://api.github.com/users/TalitaAnthonio/following{/other_user}",
"gists_url": "https://api.github.com/users/TalitaAnthonio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TalitaAnthonio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TalitaAnthonio/subscriptions",
"organizations_url": "https://api.github.com/users/TalitaAnthonio/orgs",
"repos_url": "https://api.github.com/users/TalitaAnthonio/repos",
"events_url": "https://api.github.com/users/TalitaAnthonio/events{/privacy}",
"received_events_url": "https://api.github.com/users/TalitaAnthonio/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Pinging @sgugger here - think he knows `run_clm.py` best",
"`run_clm` does not have the `line_by_line` option as it doesn't make sense for causal language modeling: pretraining for causal language modeling is done by concatenating all available texts separated by a special token, then building sequences of a certain `block_size` with them. Using `line_by_line` and keeping the sentences separate result in the model having to predict the padding token quite often (which pretrained causal models usually don't have) and without knowing when to stop predicting that padding token.\r\n\r\nOnly `run_mlm` keeps that option as it makes sense to have sentences of different lengths for masked language modeling. You can always copy the relevant bit of code in `run_mlm` to use it in `run_clm` but I would strongly advise against it.",
"Thank you for your reply, it makes sense. So that means that I need to give the script the concatenated data (and separate sequences by a special token)? Or does the script ``run_clm`` take care of that? ",
"The script takes care of that for you :-)",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"@sgugger sorry to revive that thread.\r\nI think a clm with two special tokens BOS / EOS would make sense to be trained in line by line mode, what do you think ?\r\n(btw are you saying that all pre-trained gpt2 models are trained in fixed blocks ? if so do you confirm that original papers do the same when benchmarking with standards like 1BW ?)\r\nthanks for your insight."
] | 1,608 | 1,650 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 3.5.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@LysandreJik @patrickvonplaten @TevenLeScao
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
https://github.com/huggingface/transformers/tree/master/examples/language-modeling
## To reproduce
In the old version of the script ``run_clm.py``, called ``run_language_modeling``, there was an argument ``line_by_line`` which allowed reading the data with each sequence on its own line. This argument seems to be missing in the newer version
``run_clm.py``.
## Expected behavior
Perhaps there is an argument that has replaced ``line_by_line`` but I don't really see that. Sorry if I missed something.
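For reference, a sketch of the concatenation strategy described in the comments above (close to the grouping step in `run_clm.py`; the block size and column names are illustrative):
```python
def group_texts(examples, block_size=1024):
    # Concatenate all tokenized texts, then cut the stream into fixed-size blocks.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```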
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9232/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9231 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9231/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9231/comments | https://api.github.com/repos/huggingface/transformers/issues/9231/events | https://github.com/huggingface/transformers/pull/9231 | 772,013,281 | MDExOlB1bGxSZXF1ZXN0NTQzMzYxMjEx | 9,231 | [T5] Fix warning for changed EncDec Attention Bias weight | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | In this PR: https://github.com/huggingface/transformers/pull/8518 a bug was fixed that removed an unnecessary weight from the T5 Cross Attention layer.
Following that, this weight was added to the wrong "ignore_weight" list. This weight can never be missing, since it doesn't exist in the model anymore; it can only be "not used", since it is still present in saved checkpoints. This PR fixes the incorrect warning by placing the regex in the correct list.
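For context, a sketch of how the two ignore lists are used on model classes (attribute names as in `modeling_utils`; the regexes shown are examples, the second one being the EncDec attention bias weight discussed above):
```python
from transformers.models.t5.modeling_t5 import T5PreTrainedModel

class IllustrativeT5(T5PreTrainedModel):
    # Keys matched here may be absent from a checkpoint without a "newly initialized" warning.
    _keys_to_ignore_on_load_missing = [r"encoder\.embed_tokens\.weight"]
    # Keys matched here may be present in old checkpoints but unused by the model,
    # so they should not trigger a "weights not used" warning either.
    _keys_to_ignore_on_load_unexpected = [
        r"decoder\.block\.0\.layer\.1\.EncDecAttention\.relative_attention_bias\.weight",
    ]
```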
"url": "https://api.github.com/repos/huggingface/transformers/issues/9231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9231/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9231",
"html_url": "https://github.com/huggingface/transformers/pull/9231",
"diff_url": "https://github.com/huggingface/transformers/pull/9231.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9231.patch",
"merged_at": 1608543694000
} |
https://api.github.com/repos/huggingface/transformers/issues/9230 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9230/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9230/comments | https://api.github.com/repos/huggingface/transformers/issues/9230/events | https://github.com/huggingface/transformers/pull/9230 | 771,946,388 | MDExOlB1bGxSZXF1ZXN0NTQzMzA3Mzcy | 9,230 | add base model classes to bart subclassed models | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
This PR adds base model classes for `MBart`, `Pegasus` and `Blenderbot`, and adds them to the `MODEL_MAPPING` dict.
This will enable loading these models using the `AutoModel` class and `pipelines`.
Right now these models can't be loaded using `pipeline` since pipeline relies on the `AutoModel` class.
https://github.com/huggingface/transformers/blob/a4b21cdd20328f71448123ce7c962a78a5d75612/src/transformers/pipelines.py#L105-L110
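A short sketch of what this enables (the checkpoint name is the public mBART one):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("facebook/mbart-large-cc25")
print(type(model).__name__)  # resolves to the new MBartModel base class
```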
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9230/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9230/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9230",
"html_url": "https://github.com/huggingface/transformers/pull/9230",
"diff_url": "https://github.com/huggingface/transformers/pull/9230.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9230.patch",
"merged_at": 1608560806000
} |